├── .github ├── dependabot.yml └── workflows │ └── auto-publish.yml ├── .gitignore ├── .pr-preview.json ├── CONTRIBUTING.md ├── LICENSE.md ├── README.md ├── index.bs ├── security-privacy-questionnaire.md └── w3c.json /.github/dependabot.yml: -------------------------------------------------------------------------------- 1 | # See the documentation at 2 | # https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates 3 | version: 2 4 | updates: 5 | # Update actions used by .github/workflows in this repository. 6 | - package-ecosystem: "github-actions" 7 | directory: "/" 8 | schedule: 9 | interval: "weekly" 10 | groups: 11 | actions-org: # Groups all Github-authored actions into a single PR. 12 | patterns: ["actions/*"] 13 | -------------------------------------------------------------------------------- /.github/workflows/auto-publish.yml: -------------------------------------------------------------------------------- 1 | name: CI 2 | on: 3 | pull_request: {} 4 | push: 5 | branches: [main] 6 | jobs: 7 | main: 8 | name: Build, Validate and Deploy 9 | runs-on: ubuntu-latest 10 | permissions: 11 | contents: write 12 | steps: 13 | - uses: actions/checkout@v4 14 | - uses: w3c/spec-prod@v2 15 | with: 16 | GH_PAGES_BRANCH: gh-pages 17 | BUILD_FAIL_ON: warning 18 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | index.html 2 | -------------------------------------------------------------------------------- /.pr-preview.json: -------------------------------------------------------------------------------- 1 | { 2 | "src_file": "index.bs", 3 | "type": "bikeshed", 4 | "params": { 5 | "force": 1 6 | } 7 | } 8 | 9 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # W3C Web Machine Learning Community Group 2 | 3 | This repository is being used for work in the W3C Web Machine Learning Community Group, governed by the [W3C Community License 4 | Agreement (CLA)](http://www.w3.org/community/about/agreements/cla/). To make substantive contributions, 5 | you must join the CG. 6 | 7 | If you are not the sole contributor to a contribution (pull request), please identify all 8 | contributors in the pull request comment. 9 | 10 | To add a contributor (other than yourself, that's automatic), mark them one per line as follows: 11 | 12 | ``` 13 | +@github_username 14 | ``` 15 | 16 | If you added a contributor by mistake, you can remove them in a comment with: 17 | 18 | ``` 19 | -@github_username 20 | ``` 21 | 22 | If you are making a pull request on behalf of someone else but you had no part in designing the 23 | feature, you can remove yourself with the above syntax. 24 | -------------------------------------------------------------------------------- /LICENSE.md: -------------------------------------------------------------------------------- 1 | All Reports in this Repository are licensed by Contributors 2 | under the 3 | [W3C Software and Document License](https://www.w3.org/copyright/software-license/). 4 | 5 | Contributions to Specifications are made under the 6 | [W3C CLA](https://www.w3.org/community/about/agreements/cla/). 7 | 8 | Contributions to Test Suites are made under the 9 | [W3C 3-clause BSD License](https://www.w3.org/copyright/3-clause-bsd-license-2008/). 
10 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Explainer for the Prompt API 2 | 3 | _This proposal is an early design sketch by the Chrome built-in AI team to describe the problem below and solicit feedback on the proposed solution. It has not been approved to ship in Chrome._ 4 | 5 | Browsers and operating systems are increasingly expected to gain access to a language model. ([Example](https://developer.chrome.com/docs/ai/built-in), [example](https://blogs.windows.com/windowsdeveloper/2024/05/21/unlock-a-new-era-of-innovation-with-windows-copilot-runtime-and-copilot-pcs/), [example](https://www.apple.com/apple-intelligence/).) Language models are known for their versatility. With enough creative [prompting](https://developers.google.com/machine-learning/resources/prompt-eng), they can help accomplish tasks as diverse as: 6 | 7 | * Classification, tagging, and keyword extraction of arbitrary text; 8 | * Helping users compose text, such as blog posts, reviews, or biographies; 9 | * Summarizing, e.g. of articles, user reviews, or chat logs; 10 | * Generating titles or headlines from article contents; 11 | * Answering questions based on the unstructured contents of a web page; 12 | * Translation between languages; 13 | * Proofreading. 14 | 15 | The Chrome built-in AI team and the Web Machine Learning Community Group are exploring purpose-built APIs for some of these use cases (namely [translator / language detector](https://github.com/webmachinelearning/translation-api), [summarizer / writer / rewriter](https://github.com/webmachinelearning/writing-assistance-apis), and [proofreader](https://github.com/webmachinelearning/proposals/issues/7)). This proposal additionally explores a general-purpose "prompt API" which allows web developers to prompt a language model directly. This gives web developers access to many more capabilities, at the cost of requiring them to do their own prompt engineering. 16 | 17 | Currently, web developers wishing to use language models must either call out to cloud APIs, or bring their own and run them using technologies like WebAssembly and WebGPU. By providing access to the browser or operating system's existing language model, we can provide the following benefits compared to cloud APIs: 18 | 19 | * Local processing of sensitive data, e.g. allowing websites to combine AI features with end-to-end encryption. 20 | * Potentially faster results, since there is no server round-trip involved. 21 | * Offline usage. 22 | * Lower API costs for web developers. 23 | * Allowing hybrid approaches, e.g. free users of a website use on-device AI whereas paid users use a more powerful API-based model. 24 | 25 | Similarly, compared to bring-your-own-AI approaches, using a built-in language model can save the user's bandwidth, likely benefit from more optimizations, and have a lower barrier to entry for web developers. 26 | 27 | ## Goals 28 | 29 | Our goals are to: 30 | 31 | * Provide web developers a uniform JavaScript API for accessing browser-provided language models. 32 | * Abstract away specific details of the language model in question as much as possible, e.g. tokenization, system messages, or control tokens. 33 | * Guide web developers to gracefully handle failure cases, e.g. no browser-provided model being available.
34 | * Allow a variety of implementation strategies, including on-device or cloud-based models, while keeping these details abstracted from developers. 35 | 36 | The following are explicit non-goals: 37 | 38 | * We do not intend to force every browser to ship or expose a language model; in particular, not all devices will be capable of storing or running one. It would be conforming to implement this API by always signaling that no language model is available, or to implement this API entirely by using cloud services instead of on-device models. 39 | * We do not intend to provide guarantees of language model quality, stability, or interoperability between browsers. In particular, we cannot guarantee that the models exposed by these APIs are particularly good at any given use case. These are left as quality-of-implementation issues, similar to the [shape detection API](https://wicg.github.io/shape-detection-api/). (See also a [discussion of interop](https://www.w3.org/reports/ai-web-impact/#interop) in the W3C "AI & the Web" document.) 40 | 41 | The following are potential goals we are not yet certain of: 42 | 43 | * Allow web developers to know, or control, whether language model interactions are done on-device or using cloud services. This would allow them to guarantee that any user data they feed into this API does not leave the device, which can be important for privacy purposes. Similarly, we might want to allow developers to request on-device-only language models, in case a browser offers both varieties. 44 | * Allow web developers to know some identifier for the language model in use, separate from the browser version. This would allow them to allowlist or blocklist specific models to maintain a desired level of quality, or restrict certain use cases to a specific model. 45 | 46 | Both of these potential goals could pose challenges to interoperability, so we want to investigate more how important such functionality is to developers to find the right tradeoff. 47 | 48 | ## Examples 49 | 50 | ### Zero-shot prompting 51 | 52 | In this example, a single string is used to prompt the API, which is assumed to come from the user. The returned response is from the language model. 53 | 54 | ```js 55 | const session = await LanguageModel.create(); 56 | 57 | // Prompt the model and wait for the whole result to come back. 58 | const result = await session.prompt("Write me a poem."); 59 | console.log(result); 60 | 61 | // Prompt the model and stream the result: 62 | const stream = session.promptStreaming("Write me an extra-long poem."); 63 | for await (const chunk of stream) { 64 | console.log(chunk); 65 | } 66 | ``` 67 | 68 | ### System prompts 69 | 70 | The language model can be configured with a special "system prompt" which gives it the context for future interactions. This is done using the `initialPrompts` option and the "chat completions API" `{ role, content }` format, which are expanded upon in [the following section](#n-shot-prompting). 71 | 72 | ```js 73 | const session = await LanguageModel.create({ 74 | initialPrompts: [{ role: "system", content: "Pretend to be an eloquent hamster." }] 75 | }); 76 | 77 | console.log(await session.prompt("What is your favorite food?")); 78 | ``` 79 | 80 | The system prompt is special, in that the language model will not respond to it, and it will be preserved even if the context window otherwise overflows due to too many calls to `prompt()`. 
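As a minimal sketch of observing that behavior (the `"quotaoverflow"` event is discussed in more detail [below](#tokenization-context-window-length-limits-and-overflow)):

```js
// If enough calls to prompt() overflow the context window, older user and
// assistant turns get evicted, but the system prompt above stays in effect.
session.addEventListener("quotaoverflow", () => {
  console.log("Older non-system prompts were evicted; the system prompt remains.");
});
```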
81 | 82 | If the system prompt is too large, then the promise will be rejected with a `QuotaExceededError` exception. See [below](#tokenization-context-window-length-limits-and-overflow) for more details on token counting and this new exception type. 83 | 84 | ### N-shot prompting 85 | 86 | If developers want to provide examples of the user/assistant interaction, they can add more entries to the `initialPrompts` array, using the `"user"` and `"assistant"` roles: 87 | 88 | ```js 89 | const session = await LanguageModel.create({ 90 | initialPrompts: [ 91 | { role: "system", content: "Predict up to 5 emojis as a response to a comment. Output emojis, comma-separated." }, 92 | { role: "user", content: "This is amazing!" }, 93 | { role: "assistant", content: "❤️, ➕" }, 94 | { role: "user", content: "LGTM" }, 95 | { role: "assistant", content: "👍, 🚢" } 96 | ] 97 | }); 98 | 99 | // Clone an existing session for efficiency, instead of recreating one each time. 100 | async function predictEmoji(comment) { 101 | const freshSession = await session.clone(); 102 | return await freshSession.prompt(comment); 103 | } 104 | 105 | const result1 = await predictEmoji("Back to the drawing board"); 106 | 107 | const result2 = await predictEmoji("This code is so good you should get promoted"); 108 | ``` 109 | 110 | (Note that merely creating a session does not cause any new responses from the language model. We need to call `prompt()` or `promptStreaming()` to get a response.) 111 | 112 | Some details on error cases: 113 | 114 | * Placing the `{ role: "system" }` prompt anywhere besides at the 0th position in `initialPrompts` will reject with a `TypeError`. 115 | * If the combined token length of all the initial prompts is too large, then the promise will be rejected with a [`QuotaExceededError` exception](#tokenization-context-window-length-limits-and-overflow). 116 | 117 | ### Customizing the role per prompt 118 | 119 | Our examples so far have provided `prompt()` and `promptStreaming()` with a single string. Such cases assume messages will come from the user role. These methods can also take arrays of objects in the `{ role, content }` format, in case you want to provide multiple user or assistant messages before getting another assistant message: 120 | 121 | ```js 122 | const multiUserSession = await LanguageModel.create({ 123 | initialPrompts: [{ 124 | role: "system", 125 | content: "You are a mediator in a discussion between two departments." 126 | }] 127 | }); 128 | 129 | const result = await multiUserSession.prompt([ 130 | { role: "user", content: "Marketing: We need more budget for advertising campaigns." }, 131 | { role: "user", content: "Finance: We need to cut costs and advertising is on the list." }, 132 | { role: "assistant", content: "Let's explore a compromise that satisfies both departments." } 133 | ]); 134 | 135 | // `result` will contain a compromise proposal from the assistant. 136 | ``` 137 | 138 | Because of their special behavior of being preserved on context window overflow, system prompts cannot be provided this way. 139 | 140 | ### Emulating tool use or function-calling via assistant-role prompts 141 | 142 | A special case of the above is using the assistant role to emulate tool use or function-calling, by marking a response as coming from the assistant side of the conversation: 143 | 144 | ```js 145 | const session = await LanguageModel.create({ 146 | initialPrompts: [{ 147 | role: "system", 148 | content: ` 149 | You are a helpful assistant. 
You have access to the following tools: 150 | - calculator: A calculator. To use it, write "CALCULATOR: <expression>" where <expression> is a valid mathematical expression. 151 | ` 152 | }] 153 | }); 154 | 155 | async function promptWithCalculator(prompt) { 156 | const result = await session.prompt(prompt); 157 | 158 | // Check if the assistant wants to use the calculator tool. 159 | const match = /^CALCULATOR: (.*)$/.exec(result); 160 | if (match) { 161 | const expression = match[1]; 162 | const mathResult = evaluateMathExpression(expression); 163 | 164 | // Add the result to the session so it's in context going forward. 165 | await session.prompt([{ role: "assistant", content: mathResult }]); 166 | 167 | // Return it as if that's what the assistant said to the user. 168 | return mathResult; 169 | } 170 | 171 | // The assistant didn't want to use the calculator. Just return its response. 172 | return result; 173 | } 174 | 175 | console.log(await promptWithCalculator("What is 2 + 2?")); 176 | ``` 177 | 178 | We'll likely explore more specific APIs for tool- and function-calling in the future; follow along in [issue #7](https://github.com/webmachinelearning/prompt-api/issues/7). 179 | 180 | ### Multimodal inputs 181 | 182 | All of the above examples have been of text prompts. Some language models also support other inputs. Our design initially includes the potential to support images and audio clips as inputs. This is done by using objects in the form `{ type: "image", value }` and `{ type: "audio", value }` instead of strings. The `value`s can be the following: 183 | 184 | * For image inputs: [`ImageBitmapSource`](https://html.spec.whatwg.org/#imagebitmapsource), i.e. `Blob`, `ImageData`, `ImageBitmap`, `VideoFrame`, `OffscreenCanvas`, `HTMLImageElement`, `SVGImageElement`, `HTMLCanvasElement`, or `HTMLVideoElement` (will get the current frame). Also raw bytes via `BufferSource` (i.e. `ArrayBuffer` or typed arrays). 185 | 186 | * For audio inputs: for now, `Blob`, `AudioBuffer`, or raw bytes via `BufferSource`. Other possibilities we're investigating include `HTMLAudioElement`, `AudioData`, and `MediaStream`, but we're not yet sure if those are suitable to represent "clips": most other uses of them on the web platform are able to handle streaming data. 187 | 188 | Sessions that will include these inputs need to be created using the `expectedInputs` option, to ensure that any necessary downloads are done as part of session creation, and that if the model is not capable of such multimodal prompts, the session creation fails. (See also the below discussion of [expected input languages](#multilingual-content-and-expected-input-languages), not just expected input types.) 189 | 190 | A sample of using these APIs: 191 | 192 | ```js 193 | const session = await LanguageModel.create({ 194 | // { type: "text" } is not necessary to include explicitly, unless 195 | // you also want to include expected input languages for text.
196 | expectedInputs: [ 197 | { type: "audio" }, 198 | { type: "image" } 199 | ] 200 | }); 201 | 202 | const referenceImage = await (await fetch("/reference-image.jpeg")).blob(); 203 | const userDrawnImage = document.querySelector("canvas"); 204 | 205 | const response1 = await session.prompt([{ 206 | role: "user", 207 | content: [ 208 | { type: "text", value: "Give a helpful artistic critique of how well the second image matches the first:" }, 209 | { type: "image", value: referenceImage }, 210 | { type: "image", value: userDrawnImage } 211 | ] 212 | }]); 213 | 214 | console.log(response1); 215 | 216 | const audioBlob = await captureMicrophoneInput({ seconds: 10 }); 217 | 218 | const response2 = await session.prompt([{ 219 | role: "user", 220 | content: [ 221 | { type: "text", value: "My response to your critique:" }, 222 | { type: "audio", value: audioBlob } 223 | ] 224 | }]); 225 | ``` 226 | 227 | Note how once we move to multimodal prompting, the prompt format becomes more explicit: 228 | 229 | * We must always pass an array of messages, instead of a single string value. 230 | * Each message must have a `role` property: unlike with the string shorthand, `"user"` is no longer assumed. 231 | * The `content` property must be an array of content, if it contains any multimodal content. 232 | 233 | This extra ceremony is necessary to make it clear that we are sending a single message that contains multimodal content, versus sending multiple messages, one for each piece of content. To avoid such confusion, the multimodal format has fewer defaults and shorthands than if you interact with the API using only text. (See some discussion in [issue #89](https://github.com/webmachinelearning/prompt-api/pull/89).) 234 | 235 | To illustrate, the following extension of our above [multi-user example](#customizing-the-role-per-prompt) has a similar sequence of text + image + image values compared to our artistic critique example. However, it uses a multi-message structure instead of the artistic critique example's single-message structure, so the model will interpret it differently: 236 | 237 | ```js 238 | const response = await session.prompt([ 239 | { 240 | role: "user", 241 | content: "Your compromise just made the discussion more heated. The two departments drew up posters to illustrate their strategies' advantages:" 242 | }, 243 | { 244 | role: "user", 245 | content: [{ type: "image", value: brochureFromTheMarketingDepartment }] 246 | }, 247 | { 248 | role: "user", 249 | content: [{ type: "image", value: brochureFromTheFinanceDepartment }] 250 | } 251 | ]); 252 | ``` 253 | 254 | Details: 255 | 256 | * Cross-origin data that has not been exposed using the `Access-Control-Allow-Origin` header cannot be used with the prompt API, and will reject with a `"SecurityError"` `DOMException`. This applies to `HTMLImageElement`, `SVGImageElement`, `HTMLVideoElement`, `HTMLCanvasElement`, and `OffscreenCanvas`. Note that this is more strict than `createImageBitmap()`, which has a tainting mechanism which allows creating opaque image bitmaps from unexposed cross-origin resources. For the prompt API, such resources will just fail. This includes attempts to use cross-origin-tainted canvases.
257 | 258 | * Raw-bytes cases (`Blob` and `BufferSource`) will apply the appropriate sniffing rules ([for images](https://mimesniff.spec.whatwg.org/#rules-for-sniffing-images-specifically), [for audio](https://mimesniff.spec.whatwg.org/#rules-for-sniffing-audio-and-video-specifically)) and reject with an `"EncodingError"` `DOMException` if the format is not supported or there is some error decoding the data. This behavior is similar to that of `createImageBitmap()`. 259 | 260 | * Animated images will be required to snapshot the first frame (like `createImageBitmap()`). In the future, animated image input may be supported via some separate opt-in, similar to video clip input. But we don't want interoperability problems from some implementations supporting animated images and some not, in the initial version. 261 | 262 | * For `HTMLVideoElement`, even a single frame might not yet be downloaded when the prompt API is called. In such cases, calling into the prompt API will force at least a single frame's worth of video to download. (The intent is to behave the same as `createImageBitmap(videoEl)`.) 263 | 264 | * Attempting to supply an invalid combination, e.g. `{ type: "audio", value: anImageBitmap }`, `{ type: "image", value: anAudioBuffer }`, or `{ type: "text", value: anArrayBuffer }`, will reject with a `TypeError`. 265 | 266 | * For now, using the `"assistant"` role with an image or audio prompt will reject with a `"NotSupportedError"` `DOMException`. (As we explore multimodal outputs, this restriction might be lifted in the future.) 267 | 268 | Future extensions may include more ambitious multimodal inputs, such as video clips, or realtime audio or video. (Realtime might require a different API design, more based around events or streams instead of messages.) 269 | 270 | ### Structured output with JSON schema or RegExp constraints 271 | 272 | To help with programmatic processing of language model responses, the prompt API supports constraining the response with either a JSON schema object or a `RegExp` passed as the `responseConstraint` option: 273 | 274 | ```js 275 | const schema = { 276 | type: "object", 277 | required: ["rating"], 278 | additionalProperties: false, 279 | properties: { 280 | rating: { 281 | type: "number", 282 | minimum: 0, 283 | maximum: 5, 284 | }, 285 | }, 286 | }; 287 | 288 | // Prompt the model and wait for the JSON response to come back. 289 | const result = await session.prompt("Summarize this feedback into a rating between 0-5: "+ 290 | "The food was delicious, service was excellent, will recommend.", 291 | { responseConstraint: schema } 292 | ); 293 | 294 | const { rating } = JSON.parse(result); 295 | console.log(rating); 296 | ``` 297 | 298 | If the input value is a valid JSON schema object, but uses JSON schema features not supported by the user agent, the method will error with a `"NotSupportedError"` `DOMException`. 299 | 300 | The result value returned is a string that can be parsed with `JSON.parse()`. If the user agent is unable to produce a response that is compliant with the schema, the method will error with a `"SyntaxError"` `DOMException`. 
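Both failure modes can be handled at the call site; here is a minimal sketch, where the wrapper function `getFeedbackRating` is our own invention rather than part of the API:

```js
async function getFeedbackRating(feedbackText) {
  try {
    const result = await session.prompt(
      `Summarize this feedback into a rating between 0-5: ${feedbackText}`,
      { responseConstraint: schema }
    );
    return JSON.parse(result).rating;
  } catch (e) {
    if (e.name === "NotSupportedError") {
      // The schema uses JSON schema features this user agent doesn't support.
    } else if (e.name === "SyntaxError") {
      // The model could not produce a schema-compliant response.
    }
    throw e;
  }
}
```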
301 | A `RegExp` can be passed as the constraint instead: 302 | ```js 303 | const emailRegExp = /^[a-zA-Z0-9.!#$%&'*+\/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$/; 304 | 305 | const emailAddress = await session.prompt( 306 | `Create a fictional email address for ${characterName}.`, 307 | { responseConstraint: emailRegExp } 308 | ); 309 | 310 | console.log(emailAddress); 311 | ``` 312 | 313 | The returned value will be a string that matches the input `RegExp`. If the user agent is unable to produce a response that matches, the method will error with a `"SyntaxError"` `DOMException`. 314 | 315 | If a value that is neither a `RegExp` object nor a valid JSON schema object is given, the method will error with a `TypeError`. 316 | 317 | ### Appending messages without prompting for a response 318 | 319 | In some cases, you know which messages you'll want to use to populate the session, but not yet the final message before you prompt the model for a response. Because processing messages can take some time (especially for multimodal inputs), it's useful to be able to send such messages to the model ahead of time. This allows it to get a head-start on processing, while you wait for the right time to prompt for a response. 320 | 321 | (The `initialPrompts` array serves this purpose at session creation time, but this can be useful after session creation as well, as we show in the example below.) 322 | 323 | For such cases, in addition to the `prompt()` and `promptStreaming()` methods, the prompt API provides an `append()` method, which takes the same message format as `prompt()`. Here's an example of how that could be useful: 324 | 325 | ```js 326 | const session = await LanguageModel.create({ 327 | initialPrompts: [{ 328 | role: "system", 329 | content: "You are a skilled analyst who correlates patterns across multiple images." 330 | }], 331 | expectedInputs: [{ type: "image" }] 332 | }); 333 | 334 | fileUpload.onchange = async (e) => { 335 | await session.append([{ 336 | role: "user", 337 | content: [ 338 | { type: "text", value: `Here's one image. Notes: ${fileNotesInput.value}` }, 339 | { type: "image", value: fileUpload.files[0] } 340 | ] 341 | }]); 342 | }; 343 | 344 | analyzeButton.onclick = async (e) => { 345 | analysisResult.textContent = await session.prompt(userQuestionInput.value); 346 | }; 347 | ``` 348 | 349 | The promise returned by `append()` will reject if the prompt cannot be appended (e.g., too big, invalid modalities for the session, etc.), or will fulfill once the prompt has been validated, processed, and appended to the session. 350 | 351 | Note that `append()` can also cause [overflow](#tokenization-context-window-length-limits-and-overflow), in which case it will evict the oldest non-system prompts from the session and fire the `"quotaoverflow"` event. 352 | 353 | ### Configuration of per-session parameters 354 | 355 | In addition to the `initialPrompts` option shown above, the currently-configurable model parameters are [temperature](https://huggingface.co/blog/how-to-generate#sampling) and [top-K](https://huggingface.co/blog/how-to-generate#top-k-sampling). The `params()` API gives the default and maximum values for these parameters.
356 | 357 | _However, see [issue #42](https://github.com/webmachinelearning/prompt-api/issues/42): sampling hyperparameters are not universal among models._ 358 | 359 | ```js 360 | const customSession = await LanguageModel.create({ 361 | temperature: 0.8, 362 | topK: 10 363 | }); 364 | 365 | const params = await LanguageModel.params(); 366 | const conditionalSession = await LanguageModel.create({ 367 | temperature: isCreativeTask ? params.defaultTemperature * 1.1 : params.defaultTemperature * 0.8, 368 | topK: isGeneratingIdeas ? params.maxTopK : params.defaultTopK 369 | }); 370 | ``` 371 | 372 | If the language model is not available at all in this browser, `params()` will fulfill with `null`. 373 | 374 | Error-handling behavior: 375 | 376 | * If values below 0 are passed for `temperature`, then `create()` will return a promise rejected with a `RangeError`. 377 | * If values above `maxTemperature` are passed for `temperature`, then `create()` will clamp to `maxTemperature`. (`+Infinity` is specifically allowed, as a way of requesting maximum temperature.) 378 | * If values below 1 are passed for `topK`, then `create()` will return a promise rejected with a `RangeError`. 379 | * If values above `maxTopK` are passed for `topK`, then `create()` will clamp to `maxTopK`. (This includes `+Infinity` and numbers above `Number.MAX_SAFE_INTEGER`.) 380 | * If fractional values are passed for `topK`, they are rounded down (using the usual [IntegerPart](https://webidl.spec.whatwg.org/#abstract-opdef-integerpart) algorithm for web specs). 381 | 382 | ### Session persistence and cloning 383 | 384 | Each language model session consists of a persistent series of interactions with the model: 385 | 386 | ```js 387 | const session = await LanguageModel.create({ 388 | initialPrompts: [{ 389 | role: "system", 390 | content: "You are a friendly, helpful assistant specialized in clothing choices." 391 | }] 392 | }); 393 | 394 | const result = await session.prompt(` 395 | What should I wear today? It's sunny and I'm unsure between a t-shirt and a polo. 396 | `); 397 | 398 | console.log(result); 399 | 400 | const result2 = await session.prompt(` 401 | That sounds great, but oh no, it's actually going to rain! New advice?? 402 | `); 403 | ``` 404 | 405 | Multiple unrelated continuations of the same prompt can be set up by creating a session and then cloning it: 406 | 407 | ```js 408 | const session = await LanguageModel.create({ 409 | initialPrompts: [{ 410 | role: "system", 411 | content: "You are a friendly, helpful assistant specialized in clothing choices." 
412 | }] 413 | }); 414 | 415 | const session2 = await session.clone(); 416 | ``` 417 | 418 | The clone operation can be aborted using an `AbortSignal`: 419 | 420 | ```js 421 | const controller = new AbortController(); 422 | const session2 = await session.clone({ signal: controller.signal }); 423 | ``` 424 | 425 | ### Session destruction 426 | 427 | A language model session can be destroyed, either by using an `AbortSignal` passed to the `create()` method call: 428 | 429 | ```js 430 | const controller = new AbortController(); 431 | stopButton.onclick = () => controller.abort(); 432 | 433 | const session = await LanguageModel.create({ signal: controller.signal }); 434 | ``` 435 | 436 | or by calling `destroy()` on the session: 437 | 438 | ```js 439 | stopButton.onclick = () => session.destroy(); 440 | ``` 441 | 442 | Destroying a session will have the following effects: 443 | 444 | * If done before the promise returned by `create()` is settled: 445 | 446 | * Stop signaling any ongoing download progress for the language model. (The browser may also abort the download, or may continue it. Either way, no further `downloadprogress` events will fire.) 447 | 448 | * Reject the `create()` promise. 449 | 450 | * Otherwise: 451 | 452 | * Reject any ongoing calls to `prompt()`. 453 | 454 | * Error any `ReadableStream`s returned by `promptStreaming()`. 455 | 456 | * Most importantly, destroying the session allows the user agent to unload the language model from memory, if no other APIs or sessions are using it. 457 | 458 | In all cases the exception used for rejecting promises or erroring `ReadableStream`s will be an `"AbortError"` `DOMException`, or the given abort reason. 459 | 460 | The ability to manually destroy a session allows applications to free up memory without waiting for garbage collection, which can be useful since language models can be quite large. 461 | 462 | ### Aborting a specific prompt 463 | 464 | Specific calls to `prompt()` or `promptStreaming()` can be aborted by passing an `AbortSignal` to them: 465 | 466 | ```js 467 | const controller = new AbortController(); 468 | stopButton.onclick = () => controller.abort(); 469 | 470 | const result = await session.prompt("Write me a poem", { signal: controller.signal }); 471 | ``` 472 | 473 | Note that because sessions are stateful, and prompts can be queued, aborting a specific prompt is slightly complicated: 474 | 475 | * If the prompt is still queued behind other prompts in the session, then it will be removed from the queue, and the returned promise will be rejected with an `"AbortError"` `DOMException`. 476 | * If the prompt is being currently responded to by the model, then it will be aborted, the prompt/response pair will be removed from the session, and the returned promise will be rejected with an `"AbortError"` `DOMException`. 477 | * If the prompt has already been fully responded to by the model, then attempting to abort the prompt will do nothing. 478 | 479 | Similarly, the `append()` operation can also be aborted. In this case the behavior is: 480 | 481 | * If the append is queued behind other appends in the session, then it will be removed from the queue, and the returned promise will be rejected with an `"AbortError"` `DOMException`. 482 | * If the append operation is currently ongoing, then it will be aborted, any part of the prompt that was appended so far will be removed from the session, and the returned promise will be rejected with an `"AbortError"` `DOMException`. 
483 | * If the append operation is complete (i.e., the returned promise has resolved), then attempting to abort it will do nothing. This includes all the following states: 484 | * The append operation is complete, but a prompt generation step has not yet triggered. 485 | * The append operation is complete, and a prompt generation step is processing. 486 | * The append operation is complete, and a prompt generation step has used it to produce a result. 487 | 488 | Finally, note that if either prompting or appending has caused an [overflow](#tokenization-context-window-length-limits-and-overflow), aborting the operation does not re-introduce the overflowed messages into the session. 489 | 490 | ### Tokenization, context window length limits, and overflow 491 | 492 | A given language model session will have a maximum number of tokens it can process. Developers can check their current usage and progress toward that limit by using the following properties on the session object: 493 | 494 | ```js 495 | console.log(`${session.inputUsage} tokens used, out of ${session.inputQuota} tokens available.`); 496 | ``` 497 | 498 | To know how many tokens a string will consume, without actually processing it, developers can use the `measureInputUsage()` method: 499 | 500 | ```js 501 | const usage = await session.measureInputUsage(promptString); 502 | ``` 503 | 504 | Some notes on this API: 505 | 506 | * We do not expose the actual tokenization to developers since that would make it too easy to depend on model-specific details. 507 | * Implementations must include in their count any control tokens that will be necessary to process the prompt, e.g. ones indicating the start or end of the input. 508 | * The counting process can be aborted by passing an `AbortSignal`, i.e. `session.measureInputUsage(promptString, { signal })`. 509 | * We use the phrases "input usage" and "input quota" in the API, to avoid being specific to the current language model tokenization paradigm. In the future, even if we change paradigms, we anticipate some concept of usage and quota still being applicable, even if it's just string length. 510 | 511 | It's possible to send a prompt that causes the context window to overflow. That is, consider a case where `session.measureInputUsage(promptString) > session.inputQuota - session.inputUsage` before calling `session.prompt(promptString)`, and then the web developer calls `session.prompt(promptString)` anyway. In such cases, the initial portions of the conversation with the language model will be removed, one prompt/response pair at a time, until enough tokens are available to process the new prompt. The exception is the [system prompt](#system-prompts), which is never removed. 512 | 513 | Such overflows can be detected by listening for the `"quotaoverflow"` event on the session: 514 | 515 | ```js 516 | session.addEventListener("quotaoverflow", () => { 517 | console.log("We've gone past the quota, and some inputs will be dropped!"); 518 | }); 519 | ``` 520 | 521 | If it's not possible to remove enough tokens from the conversation history to process the new prompt, then the `prompt()` or `promptStreaming()` call will fail with a `QuotaExceededError` exception and nothing will be removed. This is a proposed new type of exception, which subclasses `DOMException`, and replaces the web platform's existing `"QuotaExceededError"` `DOMException`. See [whatwg/webidl#1465](https://github.com/whatwg/webidl/pull/1465) for this proposal. 
For our purposes, the important part is that it has the following properties: 522 | 523 | * `requested`: how many tokens the input consists of 524 | * `quota`: how many tokens were available (which will be less than `requested`, and equal to the value of `session.inputQuota - session.inputUsage` at the time of the call) 525 | 526 | ### Multilingual content and expected input languages 527 | 528 | The default behavior for a language model session assumes that the input languages are unknown. In this case, implementations will use whatever "base" capabilities they have available for the language model, and might throw `"NotSupportedError"` `DOMException`s if they encounter languages they don't support. 529 | 530 | It's better practice, if possible, to supply the `create()` method with information about the expected input languages. This allows the implementation to download any necessary supporting material, such as fine-tunings or safety-checking models, and to immediately reject the promise returned by `create()` if the web developer needs to use languages that the browser is not capable of supporting: 531 | 532 | ```js 533 | const session = await LanguageModel.create({ 534 | initialPrompts: [{ 535 | role: "system", 536 | content: ` 537 | You are a foreign-language tutor for Japanese. The user is Korean. If necessary, either you or 538 | the user might "break character" and ask for or give clarification in Korean. But by default, 539 | prefer speaking in Japanese, and return to the Japanese conversation once any sidebars are 540 | concluded. 541 | ` 542 | }], 543 | expectedInputs: [{ 544 | type: "text", 545 | languages: ["en" /* for the system prompt */, "ja", "ko"] 546 | }], 547 | // See below section 548 | expectedOutputs: [{ 549 | type: "text", 550 | languages: ["ja", "ko"] 551 | }], 552 | }); 553 | ``` 554 | 555 | The expected input languages are supplied alongside the [expected input types](#multimodal-inputs), and can vary per type. Our above example assumes the default of `type: "text"`, but more complicated combinations are possible, e.g.: 556 | 557 | ```js 558 | const session = await LanguageModel.create({ 559 | expectedInputs: [ 560 | // Be sure to download any material necessary for English and Japanese text 561 | // prompts, or fail-fast if the model cannot support that. 562 | { type: "text", languages: ["en", "ja"] }, 563 | 564 | // `languages` omitted: audio input processing will be best-effort based on 565 | // the base model's capability. 566 | { type: "audio" }, 567 | 568 | // Be sure to download any material necessary for OCRing French text in 569 | // images, or fail-fast if the model cannot support that. 570 | { type: "image", languages: ["fr"] } 571 | ] 572 | }); 573 | ``` 574 | 575 | Note that the expected input languages do not affect the context or prompt the language model sees; they only impact the process of setting up the session, performing appropriate downloads, and failing creation if those input languages are unsupported. 576 | 577 | If you want to check the availability of a given `expectedInputs` configuration before initiating session creation, you can use the `LanguageModel.availability()` method: 578 | 579 | ```js 580 | const availability = await LanguageModel.availability({ 581 | expectedInputs: [ 582 | { type: "text", languages: ["en", "ja"] }, 583 | { type: "audio", languages: ["en", "ja"] } 584 | ] 585 | }); 586 | 587 | // `availability` will be one of "unavailable", "downloadable", "downloading", or "available". 
588 | ``` 589 | 590 | ### Expected output languages 591 | 592 | In general, what output language the model responds in will be governed by the language model's own decisions. For example, a prompt such as "Please say something in French" could produce "Bonjour" or it could produce "I'm sorry, I don't know French". 593 | 594 | However, if you know ahead of time what languages you are hoping for the language model to output, it's best practice to use the `expectedOutputs` option to `LanguageModel.create()` to indicate them. This allows the implementation to download any necessary supporting material for those output languages, and to immediately reject the returned promise if it's known that the model cannot support that language: 595 | 596 | ```js 597 | const session = await LanguageModel.create({ 598 | initialPrompts: [{ 599 | role: "system", 600 | content: `You are a helpful, harmless French chatbot.` 601 | }], 602 | expectedInputs: [ 603 | { type: "text", languages: ["en" /* for the system prompt */, "fr"] } 604 | ], 605 | expectedOutputs: [ 606 | { type: "text", languages: ["fr"] } 607 | ] 608 | }); 609 | ``` 610 | 611 | As with `expectedInputs`, specifying a given language in `expectedOutputs` does not actually influence the language model's output. It's only expressing an expectation that can help set up the session, perform downloads, and fail creation if necessary. And as with `expectedInputs`, you can use `LanguageModel.availability()` to check ahead of time, before creating a session. 612 | 613 | (Note that presently, the prompt API does not support multimodal outputs, so including any array entries with `type`s other than `"text"` will always fail. However, we've chosen this general shape so that in the future, if multimodal output support is added, it fits into the API naturally.) 614 | 615 | ### Testing available options before creation 616 | 617 | In the simple case, web developers should call `LanguageModel.create()`, and handle failures gracefully. 618 | 619 | However, if the web developer wants to provide a differentiated user experience, which lets users know ahead of time that the feature will not be possible or might require a download, they can use the promise-returning `LanguageModel.availability()` method. This method lets developers know, before calling `create()`, what is possible with the implementation. 620 | 621 | The method will return a promise that fulfills with one of the following availability values: 622 | 623 | * "`unavailable`" means that the implementation does not support the requested options, or does not support prompting a language model at all. 624 | * "`downloadable`" means that the implementation supports the requested options, but it will have to download something (e.g. the language model itself, or a fine-tuning) before it can create a session using those options. 625 | * "`downloading`" means that the implementation supports the requested options, but will need to finish an ongoing download operation before it can create a session using those options. 626 | * "`available`" means that the implementation supports the requested options without requiring any new downloads.
627 | 628 | An example usage is the following: 629 | 630 | ```js 631 | const options = { 632 | expectedInputs: [ 633 | { type: "text", languages: ["en", "es"] }, 634 | { type: "audio", languages: ["en", "es"] } 635 | ], 636 | temperature: 2 637 | }; 638 | 639 | const availability = await LanguageModel.availability(options); 640 | 641 | if (availability !== "unavailable") { 642 | if (availability !== "available") { 643 | console.log("Sit tight, we need to do some downloading..."); 644 | } 645 | 646 | const session = await LanguageModel.create(options); 647 | // ... Use session ... 648 | } else { 649 | // Either the API overall, or the expected languages and temperature setting, is not available. 650 | console.error("No language model for us :("); 651 | } 652 | ``` 653 | 654 | ### Download progress 655 | 656 | For cases where using the API is only possible after a download, you can monitor the download progress (e.g. in order to show your users a progress bar) using code such as the following: 657 | 658 | ```js 659 | const session = await LanguageModel.create({ 660 | monitor(m) { 661 | m.addEventListener("downloadprogress", e => { 662 | console.log(`Downloaded ${e.loaded * 100}%`); 663 | }); 664 | } 665 | }); 666 | ``` 667 | 668 | If the download fails, then `downloadprogress` events will stop being emitted, and the promise returned by `create()` will be rejected with a "`NetworkError`" `DOMException`. 669 | 670 | Note that in the case that multiple entities are downloaded (e.g., a base model plus [LoRA fine-tunings](https://arxiv.org/abs/2106.09685) for the `expectedInputs`) web developers do not get the ability to monitor the individual downloads. All of them are bundled into the overall `downloadprogress` events, and the `create()` promise is not fulfilled until all downloads and loads are successful. 671 | 672 | The event is a [`ProgressEvent`](https://developer.mozilla.org/en-US/docs/Web/API/ProgressEvent) whose `loaded` property is between 0 and 1, and whose `total` property is always 1. (The exact number of total or downloaded bytes are not exposed; see the discussion in [webmachinelearning/writing-assistance-apis issue #15](https://github.com/webmachinelearning/writing-assistance-apis/issues/15).) 673 | 674 | At least two events, with `e.loaded === 0` and `e.loaded === 1`, will always be fired. This is true even if creating the model doesn't require any downloading. 675 | 676 |
<details> 677 | <summary>What's up with this pattern?</summary> 678 | 679 | This pattern is a little involved. Several alternatives have been considered. However, asking around the web standards community, it seemed like this one was best, as it allows using standard event handlers and `ProgressEvent`s, and also ensures that once the promise is settled, the session object is completely ready to use. 680 | 681 | It is also nicely future-extensible by adding more events and properties to the `m` object. 682 | 683 | Finally, note that there is a sort of precedent in the (never-shipped) [`FetchObserver` design](https://github.com/whatwg/fetch/issues/447#issuecomment-281731850). 684 | </details>
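As a concrete sketch, these events can drive an HTML `<progress>` element directly; here `progressElement` is an assumed reference to a `<progress max="1">` in the page:

```js
const session = await LanguageModel.create({
  monitor(m) {
    m.addEventListener("downloadprogress", (e) => {
      // e.loaded is always between 0 and 1 (with e.total always 1), so it
      // can be assigned to the progress element's value directly.
      progressElement.value = e.loaded;
    });
  }
});
```

Because the `e.loaded === 0` and `e.loaded === 1` events always fire, the bar reliably starts empty and ends full, even when no download is needed.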
685 | 686 | ## Detailed design 687 | 688 | ### Instruction-tuned versus base models 689 | 690 | We intend for this API to expose instruction-tuned models. Although we cannot mandate any particular level of quality or instruction-following capability, we think setting this base expectation can help ensure that what browsers ship is aligned with what web developers expect. 691 | 692 | To illustrate the difference and how it impacts web developer expectations: 693 | 694 | * In a base model, a prompt like "Write a poem about trees." might get completed with "... Write about the animal you would like to be. Write about a conflict between a brother and a sister." (etc.) It is directly completing plausible next tokens in the text sequence. 695 | * Whereas, in an instruction-tuned model, the model will generally _follow_ instructions like "Write a poem about trees.", and respond with a poem about trees. 696 | 697 | To ensure the API can be used by web developers across multiple implementations, all browsers should be sure their models behave like instruction-tuned models. 698 | 699 | ### Permissions policy, iframes, and workers 700 | 701 | By default, this API is only available to top-level `Window`s, and to their same-origin iframes. Access to the API can be delegated to cross-origin iframes using the [Permissions Policy](https://developer.mozilla.org/en-US/docs/Web/HTTP/Permissions_Policy) `allow=""` attribute: 702 | 703 | ```html 704 | <iframe src="https://example.com/" allow="language-model"></iframe> 705 | ``` 706 | 707 | This API is currently not available in workers, due to the complexity of establishing a responsible document for each worker in order to check the permissions policy status. See [this discussion](https://github.com/webmachinelearning/translation-api/issues/18#issuecomment-2705630392) for more. It may be possible to loosen this restriction over time, if use cases arise. 708 | 709 | Note that although the API is not exposed to web platform workers, a browser could expose it to extension service workers, which are outside the scope of web platform specifications and have a different permissions model. 710 | 711 | ## Alternatives considered and under consideration 712 | 713 | ### How many stages to reach a response? 714 | 715 | To actually get a response back from the model given a prompt, the following possible stages are involved: 716 | 717 | 1. Download the model, if necessary. 718 | 2. Establish a session, including configuring per-session options and parameters. 719 | 3. Add an initial prompt to establish context. (This will not generate a response.) 720 | 4. Execute a prompt and receive a response. 721 | 722 | We've chosen to manifest these 3-4 stages into the API as two methods, `LanguageModel.create()` and `session.prompt()`/`session.promptStreaming()`, with some additional facilities for dealing with the fact that `LanguageModel.create()` can include a download step. Some APIs simplify this into a single method, and some split it up into three (usually not four). 723 | 724 | ### Stateless or session-based 725 | 726 | Our design here uses [sessions](#session-persistence-and-cloning). An alternate design, seen in some APIs, is to require the developer to feed in the entire conversation history to the model each time, keeping track of the results. 727 | 728 | This can be slightly more flexible; for example, it allows manually correcting the model's responses before feeding them back into the context window.
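To illustrate, here is a rough sketch of how a web developer could emulate that stateless style on top of this API, under the assumption that a fresh session is cheap enough to create per interaction (all names apart from the API's own are hypothetical):

```js
const history = [{ role: "system", content: "You are a terse assistant." }];

async function statelessPrompt(userMessage) {
  // Feed the full accumulated history into a throwaway session each time.
  const session = await LanguageModel.create({ initialPrompts: history });
  const response = await session.prompt(userMessage);
  session.destroy();

  // The developer owns the history, so the model's response could be
  // manually corrected here before it becomes part of future context.
  history.push({ role: "user", content: userMessage });
  history.push({ role: "assistant", content: response });
  return response;
}
```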
729 | 730 | However, our understanding is that the session-based model can be more efficiently implemented, at least for browsers with on-device models. (Implementing it for a cloud-based model would likely be more work.) And, developers can always achieve a stateless model by using a new session for each interaction. 731 | 732 | ## Privacy and security considerations 733 | 734 | Please see [the _Writing Assistance APIs_ specification](https://webmachinelearning.github.io/writing-assistance-apis/#privacy), where we have centralized the normative privacy and security considerations that apply to all APIs of this type. 735 | 736 | ## Stakeholder feedback 737 | 738 | * W3C TAG: [w3ctag/design-reviews#1093](https://github.com/w3ctag/design-reviews/issues/1093) 739 | * Browser engines and browsers: 740 | * Chromium: prototyping behind a flag 741 | * Mozilla: [mozilla/standards-positions#1213](https://github.com/mozilla/standards-positions/issues/1213) 742 | * WebKit: [WebKit/standards-positions#495](https://github.com/WebKit/standards-positions/issues/495) 743 | * Web developers: positive 744 | * See [issue #74](https://github.com/webmachinelearning/prompt-api/issues/74) for some developer feedback 745 | * Examples of organic enthusiasm: [X post](https://x.com/mortenjust/status/1805190952358650251), [blog post](https://tyingshoelaces.com/blog/chrome-ai-prompt-api), [blog post](https://labs.thinktecture.com/local-small-language-models-in-the-browser-a-first-glance-at-chromes-built-in-ai-and-prompt-api-with-gemini-nano/) 746 | * [Feedback from developer surveys](https://docs.google.com/presentation/d/1DhFC2oB4PRrchavxUY3h9U4w4hrX5DAc5LoMqhn5hnk/edit#slide=id.g349a9ada368_1_6327) 747 | -------------------------------------------------------------------------------- /index.bs: -------------------------------------------------------------------------------- 1 | 19 | 20 |

<h2>Introduction</h2>

21 | 22 | TODO 23 | 24 |

<h2 id="dependencies">Dependencies</h2>

25 | 26 | This specification depends on the Infra Standard. [[!INFRA]] 27 | 28 | As with the rest of the web platform, human languages are identified in these APIs by BCP 47 language tags, such as "`ja`", "`en-US`", "`sr-Cyrl`", or "`de-CH-1901-x-phonebk-extended`". The specific algorithms used for validation, canonicalization, and language tag matching are those from the ECMAScript Internationalization API Specification, which in turn defers some of its processing to Unicode Locale Data Markup Language (LDML). [[BCP47]] [[!ECMA-402]] [[UTS35]]. 29 | 30 | These APIs are part of a family of APIs expected to be powered by machine learning models, which share common API surface idioms and specification patterns. Currently, the specification text for these shared parts lives in [[WRITING-ASSISTANCE-APIS#supporting]], and the common privacy and security considerations are discussed in [[WRITING-ASSISTANCE-APIS#privacy]] and [[WRITING-ASSISTANCE-APIS#security]]. Implementing these APIs requires implementing that shared infrastructure, and conforming to those privacy and security considerations. But it does not require implementing or exposing the actual writing assistance APIs. [[!WRITING-ASSISTANCE-APIS]] 31 | 32 |

<h2>The API</h2>

33 | 34 | 35 | [Exposed=Window, SecureContext] 36 | interface LanguageModel : EventTarget { 37 | static Promise<LanguageModel> create(optional LanguageModelCreateOptions options = {}); 38 | static Promise<Availability> availability(optional LanguageModelCreateCoreOptions options = {}); 39 | static Promise<LanguageModelParams?> params(); 40 | 41 | // These will throw "NotSupportedError" DOMExceptions if role = "system" 42 | Promise<DOMString> prompt( 43 | LanguageModelPrompt input, 44 | optional LanguageModelPromptOptions options = {} 45 | ); 46 | ReadableStream promptStreaming( 47 | LanguageModelPrompt input, 48 | optional LanguageModelPromptOptions options = {} 49 | ); 50 | Promise<undefined> append( 51 | LanguageModelPrompt input, 52 | optional LanguageModelAppendOptions options = {} 53 | ); 54 | 55 | Promise<double> measureInputUsage( 56 | LanguageModelPrompt input, 57 | optional LanguageModelPromptOptions options = {} 58 | ); 59 | readonly attribute double inputUsage; 60 | readonly attribute unrestricted double inputQuota; 61 | attribute EventHandler onquotaoverflow; 62 | 63 | readonly attribute unsigned long topK; 64 | readonly attribute float temperature; 65 | 66 | Promise<LanguageModel> clone(optional LanguageModelCloneOptions options = {}); 67 | undefined destroy(); 68 | }; 69 | 70 | [Exposed=Window, SecureContext] 71 | interface LanguageModelParams { 72 | readonly attribute unsigned long defaultTopK; 73 | readonly attribute unsigned long maxTopK; 74 | readonly attribute float defaultTemperature; 75 | readonly attribute float maxTemperature; 76 | }; 77 | 78 | dictionary LanguageModelCreateCoreOptions { 79 | // Note: these two have custom out-of-range handling behavior, not in the IDL layer. 80 | // They are unrestricted double so as to allow +Infinity without failing. 
81 | unrestricted double topK; 82 | unrestricted double temperature; 83 | 84 | sequence<LanguageModelExpected> expectedInputs; 85 | sequence<LanguageModelExpected> expectedOutputs; 86 | }; 87 | 88 | dictionary LanguageModelCreateOptions : LanguageModelCreateCoreOptions { 89 | AbortSignal signal; 90 | CreateMonitorCallback monitor; 91 | 92 | sequence<LanguageModelMessage> initialPrompts; 93 | }; 94 | 95 | dictionary LanguageModelPromptOptions { 96 | object responseConstraint; 97 | AbortSignal signal; 98 | }; 99 | 100 | dictionary LanguageModelAppendOptions { 101 | AbortSignal signal; 102 | }; 103 | 104 | dictionary LanguageModelCloneOptions { 105 | AbortSignal signal; 106 | }; 107 | 108 | dictionary LanguageModelExpected { 109 | required LanguageModelMessageType type; 110 | sequence<DOMString> languages; 111 | }; 112 | 113 | // The argument to the prompt() method and others like it 114 | 115 | typedef ( 116 | sequence<LanguageModelMessage> 117 | // Shorthand for `[{ role: "user", content: [{ type: "text", value: providedValue }] }]` 118 | or DOMString 119 | ) LanguageModelPrompt; 120 | 121 | dictionary LanguageModelMessage { 122 | required LanguageModelMessageRole role; 123 | 124 | // The DOMString branch is shorthand for `[{ type: "text", value: providedValue }]` 125 | required (DOMString or sequence<LanguageModelMessageContent>) content; 126 | }; 127 | 128 | dictionary LanguageModelMessageContent { 129 | required LanguageModelMessageType type; 130 | required LanguageModelMessageValue value; 131 | }; 132 | 133 | enum LanguageModelMessageRole { "system", "user", "assistant" }; 134 | 135 | enum LanguageModelMessageType { "text", "image", "audio" }; 136 | 137 | typedef ( 138 | ImageBitmapSource 139 | or AudioBuffer 140 | or BufferSource 141 | or DOMString 142 | ) LanguageModelMessageValue; 143 | 144 | 145 |

<h2>Permissions policy integration</h2>

146 | 147 | Access to the prompt API is gated behind the [=policy-controlled feature=] "language-model", which has a [=policy-controlled feature/default allowlist=] of [=default allowlist/'self'=]. 148 | 149 |

<h2>Privacy considerations</h2>

150 | 151 | Please see [[WRITING-ASSISTANCE-APIS#privacy]] for a discussion of privacy considerations for the prompt API. That text was written to apply to all APIs sharing the same infrastructure, as noted in [[#dependencies]]. 152 | 153 |

<h2>Security considerations</h2>

154 | 155 | Please see [[WRITING-ASSISTANCE-APIS#security]] for a discussion of security considerations for the prompt API. That text was written to apply to all APIs sharing the same infrastructure, as noted in [[#dependencies]]. 156 | -------------------------------------------------------------------------------- /security-privacy-questionnaire.md: -------------------------------------------------------------------------------- 1 | # [Self-Review Questionnaire: Security and Privacy](https://w3ctag.github.io/security-questionnaire/) 2 | 3 | > 01. What information does this feature expose, 4 | > and for what purposes? 5 | 6 | This feature exposes two large categories of information: 7 | 8 | - The implicit behavior of the underlying language model, in terms of what responses it provides to given inputs. 9 | 10 | - The availability information for various capabilities of the API, so that web developers know what capabilities are available in the current browser, and whether using them will require a download or the capability can be used readily. 11 | 12 | The privacy implications of both of these are discussed, in general terms, [in the _Writing Assistance APIs_ specification](https://webmachinelearning.github.io/writing-assistance-apis/#privacy), which was written to cover all APIs with similar concerns. 13 | 14 | > 02. Do features in your specification expose the minimum amount of information 15 | > necessary to implement the intended functionality? 16 | 17 | We believe so. It's possible that we could remove the exposure of the download status information. However, it would almost certainly be inferrable via timing side-channels. (I.e., if downloading a language model or fine-tuning is required, then the web developer can observe the creation of the `LanguageModel` object taking longer.) 18 | 19 | > 03. Do the features in your specification expose personal information, 20 | > personally-identifiable information (PII), or information derived from 21 | > either? 22 | 23 | No. Although it's imaginable that the backing language model could be fine-tuned on PII to give more accurate-to-this-user outputs, we intend to disallow this in the specification. 24 | 25 | > 04. How do the features in your specification deal with sensitive information? 26 | 27 | We do not deal with sensitive information. 28 | 29 | > 05. Does data exposed by your specification carry related but distinct 30 | > information that may not be obvious to users? 31 | 32 | It is possible that the multimodal inputs support for this API could pass along metadata, such as image metadata, to the underlying language model. There are cases where this can be useful, so we're unsure whether requiring that the implementation strip such information is the right path. 33 | 34 | > 06. Do the features in your specification introduce state 35 | > that persists across browsing sessions? 36 | 37 | Yes. The downloading of language models, and any collateral necessary to support various options, persists across browsing sessions. 38 | 39 | > 07. Do the features in your specification expose information about the 40 | > underlying platform to origins? 41 | 42 | Possibly. If a browser does not bundle its own models, but instead uses the operating system's functionality, it is possible for a web developer to infer information about such operating system functionality. 43 | 44 | > 08. Does this specification allow an origin to send data to the underlying 45 | > platform? 46 | 47 | Possibly. 
18 | 
19 | > 03. Do the features in your specification expose personal information,
20 | >     personally-identifiable information (PII), or information derived from
21 | >     either?
22 | 
23 | No. Although it's imaginable that the backing language model could be fine-tuned on PII to give more accurate-to-this-user outputs, we intend to disallow this in the specification.
24 | 
25 | > 04. How do the features in your specification deal with sensitive information?
26 | 
27 | We do not deal with sensitive information.
28 | 
29 | > 05. Does data exposed by your specification carry related but distinct
30 | >     information that may not be obvious to users?
31 | 
32 | It is possible that the multimodal input support for this API could pass along metadata, such as image metadata, to the underlying language model. There are cases where this can be useful, so we're unsure whether requiring that the implementation strip such information is the right path.
33 | 
34 | > 06. Do the features in your specification introduce state
35 | >     that persists across browsing sessions?
36 | 
37 | Yes. The downloaded language models, and any collateral necessary to support various options, persist across browsing sessions.
38 | 
39 | > 07. Do the features in your specification expose information about the
40 | >     underlying platform to origins?
41 | 
42 | Possibly. If a browser does not bundle its own models, but instead uses the operating system's functionality, it is possible for a web developer to infer information about such operating system functionality.
43 | 
44 | > 08. Does this specification allow an origin to send data to the underlying
45 | >     platform?
46 | 
47 | Possibly. Again, in the scenario where the model comes from the operating system, such data would pass through OS libraries.
48 | 
49 | > 09. Do features in this specification enable access to device sensors?
50 | 
51 | No.
52 | 
53 | > 10. Do features in this specification enable new script execution/loading
54 | >     mechanisms?
55 | 
56 | No.
57 | 
58 | > 11. Do features in this specification allow an origin to access other devices?
59 | 
60 | No.
61 | 
62 | > 12. Do features in this specification allow an origin some measure of control over
63 | >     a user agent's native UI?
64 | 
65 | No.
66 | 
67 | > 13. What temporary identifiers do the features in this specification create or
68 | >     expose to the web?
69 | 
70 | None.
71 | 
72 | > 14. How does this specification distinguish between behavior in first-party and
73 | >     third-party contexts?
74 | 
75 | We intend to use permissions policy to disallow the usage of these features by default in third-party (cross-origin) contexts. However, the top-level site can delegate to cross-origin iframes.
76 | 
77 | Otherwise, some of the possible [anti-fingerprinting mitigations](https://webmachinelearning.github.io/writing-assistance-apis/#privacy-availability) involve partitioning information across sites, which is in effect a form of distinguishing between first- and third-party contexts.
78 | 
79 | > 15. How do the features in this specification work in the context of a browser’s
80 | >     Private Browsing or Incognito mode?
81 | 
82 | One possible area of discussion here is whether backing these APIs with cloud-based models makes sense in such modes, or whether they should be disabled.
83 | 
84 | Otherwise, we do not anticipate any differences.
85 | 
86 | > 16. Does this specification have both "Security Considerations" and "Privacy
87 | >     Considerations" sections?
88 | 
89 | We don't yet have a specification, but when we do, we anticipate it delegating to the corresponding sections in _Writing Assistance APIs_:
90 | 
91 | * [Privacy considerations](https://webmachinelearning.github.io/writing-assistance-apis/#privacy)
92 | * [Security considerations](https://webmachinelearning.github.io/writing-assistance-apis/#security)
93 | 
94 | > 17. Do features in your specification enable origins to downgrade default
95 | >     security protections?
96 | 
97 | No.
98 | 
99 | > 18. What happens when a document that uses your feature is kept alive in BFCache
100 | >     (instead of getting destroyed) after navigation, and potentially gets reused
101 | >     on future navigations back to the document?
102 | 
103 | Ideally, nothing special should happen. In particular, `LanguageModel` objects should still be usable without interruption after navigating back. We'll need to add web platform tests to confirm this, as it's easy to imagine implementation architectures in which keeping these objects alive while the `Document` is in the back/forward cache is difficult.
104 | 
105 | (For such implementations, failing to bfcache `Document`s with active `LanguageModel` objects would be a simple way of being spec-compliant.)
106 | 
107 | > 19. What happens when a document that uses your feature gets disconnected?
108 | 
109 | The methods of the `LanguageModel` objects will start rejecting with `"InvalidStateError"` `DOMException`s.
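
For illustration, a sketch of how this could be exercised; the `create()` and `prompt()` names follow the explainer, and the iframe setup is hypothetical rather than drawn from a specification:

```js
// Illustrative only: create a session from a same-origin iframe's
// window, then disconnect that document and observe the rejection.
const iframe = document.createElement("iframe");
document.body.append(iframe);
const session = await iframe.contentWindow.LanguageModel.create();

iframe.remove(); // the iframe's document is now disconnected

try {
  await session.prompt("Hello!");
} catch (e) {
  console.assert(e.name === "InvalidStateError");
}
```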
110 | 
111 | > 20. Does your spec define when and how new kinds of errors should be raised?
112 | 
113 | We do not yet have a specification, but we anticipate that we will follow the model in the [_Writing Assistance APIs_](https://webmachinelearning.github.io/writing-assistance-apis/). This includes both well-specified errors and implementation-defined errors that reveal the limitations of the model. The user agent can instead use an `"UnknownError"` `DOMException` if necessary to protect privacy or security.
114 | 
115 | > 21. Does your feature allow sites to learn about the user's use of assistive technology?
116 | 
117 | No.
118 | 
119 | > 22. What should this questionnaire have asked?
120 | 
121 | Seems fine.
122 | 
--------------------------------------------------------------------------------
/w3c.json:
--------------------------------------------------------------------------------
1 | {
2 |     "group": [110166]
3 |     , "contacts": ["cwilso"]
4 |     , "repo-type": "cg-report"
5 | }
6 | 
--------------------------------------------------------------------------------