├── MTL_BG_Blur_HP_best_ower_efficiency.png
├── explainer.md
├── ptat-bg5fps-rel.png
└── security-privacy-self-assessment.md

/MTL_BG_Blur_HP_best_ower_efficiency.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/riju/backgroundBlur/9f93410ae2de8a72fca98f6d6dd7a72fb2ed5ce5/MTL_BG_Blur_HP_best_ower_efficiency.png
--------------------------------------------------------------------------------

/explainer.md:
--------------------------------------------------------------------------------

# Background Blur & Mask

## Authors:

- Rijubrata Bhaumik, Intel Corporation
- Eero Häkkinen, Intel Corporation
- Youenn Fablet, Apple Inc.

## Participate
- github.com/riju/backgroundBlur/issues/

## Introduction

Background Blur has become one of the most used features on video conferencing web apps like [Teams](https://www.microsoft.com/en-us/microsoft-teams/virtual-meeting-backgrounds), [Meet](https://workspaceupdates.googleblog.com/2020/09/blur-your-background-in-google-meet.html), [Zoom](https://support.zoom.us/hc/en-us/articles/360061468611-Using-blurred-background-), [Webex](https://help.webex.com/en-us/article/0p4gb1/Webex-App-%7C-Use-a-virtual-or-blurred-background-in-calls-and-meetings), etc. We want to give web apps levers similar to those of their native counterparts by leveraging the same platform APIs, and to delight users without relying completely on ML frameworks like TensorFlow.js and [Mediapipe](https://ai.googleblog.com/2020/10/background-features-in-google-meet.html) or on cloud-based solutions.

## Use Cases

A vast majority of communication these days happens on our client devices. During video meetings, participants are usually aware of how they look and of what their environment (usually their home) reveals to the audience. Most folks, especially those without a dedicated office space, would be inclined to hide messy rooms with pets and kids. Video meetings, like face-to-face meetings, are important for non-verbal communication, but participants would rather focus on the subject at hand by removing distractions in the background and preventing accidental snafus. [Microsoft says](https://www.microsoft.com/en-ww/microsoft-365/business-insights-ideas/resources/how-custom-backgrounds-keep-the-focus-on-you) that in a 38-minute conference call, 13 minutes are wasted dealing with distractions and interruptions. Background Blur goes a long way toward cutting down those disruptions. [Zoom says](https://support.zoom.us/hc/en-us/articles/360061468611-Using-blurred-background): "_When a custom virtual background is unavailable or not suiting your needs, but you still want to maintain some privacy with regards to your surroundings, the blur background option can be a great alternative. This option simply blurs the background of your video, obscuring exactly who or what is behind you. It's great for hiding a cluttered dorm room, taking a meeting in a coffee shop, or just keeping things professional._" In fact, the NCSC (National Cyber Security Centre, UK) [suggests using background blur or a background image](https://www.ncsc.gov.uk/guidance/video-conferencing-services-security-guidance-organisations) for staff meetings to add a degree of personal privacy.
On the Web, due to the lack of a standardized JS API for background blur despite widespread demand, developers have had no option but to use ML frameworks like TensorFlow.js and other WASM libraries to satisfy their customers. This Background Blur API gives developers a **choice** to use the native platform's API. This would ensure conformance with the corresponding native apps.

## Goals

* Ideally, the Background Blur API should work with APIs like [Face Detection](https://github.com/riju/faceDetection/blob/main/explainer.md#goals). Face detection would help to provide either a [mask (contour) or at least the bounding box coordinates](https://github.com/riju/faceDetection/blob/main/explainer.md#face-detection-api), which might help background blur to compute faster. In many cases platforms will do an in-stream correction, and explicit face detection might not be needed at the browser level. In that case the detectedFaces options can be nullable.

* The Background Blur API should have options similar to what consumers demand: things possible in ML frameworks but not yet exposed by present-day platform APIs, like blur level. Users might want to vary the blur intensity based on who they are communicating with.

* The Background Blur API should work with [Workers](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers).

## Non-goals

One adjacent goal was to combine Background Replacement with Background Blur as part of an overall Background Concealment API. Presently there are no platform APIs to support Background Replacement (green screen, animated GIF, image, video), so combining both might be premature. Instead, we would like to keep Background Replacement as a separate feature proposal for a later date.

**Update:** Platforms have added APIs for a **Background Segmentation MASK**, through which we can get access to background mask metadata. Using this information, developers can create features like a green screen, a transparent background, or other background replacement techniques.

## Performance

Background blur, using the proposed JavaScript API, was compared to several other alternatives in terms of power usage.

![Package Power Consumption](MTL_BG_Blur_HP_best_ower_efficiency.png)

* _Tool_: Intel Power and Thermal Analysis Tool (PTAT)
* _Date of testing_: May 2024
* _System configuration_: HP Spectre x360, Intel Core Ultra 7 155H
* _Relevant testing/workload setup_:
  - 640x480 pixel frame resolution
  - Windows > Settings > System > Power & battery > Power mode
  - The tests were repeated with the “Best power efficiency” power profile.

[TF.js](https://storage.googleapis.com/tfjs-models/demos/body-pix/index.html)

- Architecture: MobileNetV1 and ResNet50
- Estimate: segmentation
- Effect: bokeh
- TF.js MobileNetV1: FPS ~ 15
- TF.js ResNet50: FPS ~ 10

[Mediapipe](https://storage.googleapis.com/tfjs-models/demos/segmentation/index.html?model=selfie_segmentation)

- Model: MediaPipeSelfieSegmentation
- Visualization: bokehEffect
- Runtime-backend: mediapipe-gpu

Please check the [Disclaimer](#disclaimer).

## User research

* TEAMS: Supportive. Let's start with an MVP and add the Background Replacement feature later.

  *Agreed, [Background Replacement is now stated as Future Work](https://github.com/riju/backgroundBlur/blob/main/explainer.md#non-goals). Some of the new features on Windows have [requirements](https://docs.microsoft.com/en-us/windows-hardware/drivers/stream/ksproperty-cameracontrol-extended-backgroundsegmentation#requirements) which might not be available to a lot of users right away, so we work on a minimal set which works on a broader set of clients and add newer features later.*
* MEET: Supportive, but Segmentation Mask is a basic building block in MediaPipe.

  *We discussed that MASK would be more important in Background Replacement scenarios. Since no platform APIs support Background Replacement right now, we decided to move it to [future work](https://github.com/riju/backgroundBlur/blob/main/explainer.md#non-goals).*

* Zoom :

## Background Blur API

This is about bringing platform background concealment capabilities to the web, so constrainable media track capabilities fit the purpose naturally. Because the concealment is (or should be) implemented by the platform media pipeline, it is enough for a web application to control the concealment through constrainable properties. The application does not have to (and actually cannot) do the actual concealment in this case; the concealment is already done before the application receives the video frames. On Apple devices, Background Blur is controlled globally from Control Center, so no app can independently switch the feature on or off.

```js
partial dictionary MediaTrackSupportedConstraints {
  boolean backgroundBlur = true;
};

partial dictionary MediaTrackCapabilities {
  sequence<boolean> backgroundBlur;
};

partial dictionary MediaTrackConstraintSet {
  ConstrainBoolean backgroundBlur;
};

partial dictionary MediaTrackSettings {
  boolean backgroundBlur;
};
```

[PR](https://github.com/w3c/mediacapture-extensions/pull/61)

If a need for more fine-grained background blur levels arises, the boolean _backgroundBlur_ constrainable property could be replaced with a DOMString _backgroundBlurMode_ constrainable property which could support values like `"off"`, `"light"` and `"heavy"`, for instance.
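As a rough sketch (purely illustrative, not part of the current proposal), such a mode-based property could be shaped like this:

```js
// Hypothetical shape for a mode-based variant; names are illustrative.
enum BackgroundBlurMode { "off", "light", "heavy" };

partial dictionary MediaTrackSupportedConstraints {
  boolean backgroundBlurMode = true;
};

partial dictionary MediaTrackCapabilities {
  sequence<DOMString> backgroundBlurMode;
};

partial dictionary MediaTrackConstraintSet {
  ConstrainDOMString backgroundBlurMode;
};

partial dictionary MediaTrackSettings {
  DOMString backgroundBlurMode;
};
```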
## Background Segmentation Mask API

This is about bringing platform background segmentation capabilities to the web, so constrainable media track capabilities fit the purpose naturally. Because the segmentation is (or should be) implemented by the platform media pipeline, it is enough for a web application to control the segmentation through constrainable properties. In this case, the application must do the actual concealment by itself, based on normal and background mask video frames which can be classified as such using video frame metadata.

```js
partial dictionary MediaTrackSupportedConstraints {
  boolean backgroundSegmentationMask = true;
};

partial dictionary MediaTrackCapabilities {
  sequence<boolean> backgroundSegmentationMask;
};

partial dictionary MediaTrackConstraintSet {
  ConstrainBoolean backgroundSegmentationMask;
};

partial dictionary MediaTrackSettings {
  boolean backgroundSegmentationMask;
};

partial dictionary VideoFrameCallbackMetadata {
  boolean backgroundSegmentationMask;
};

partial dictionary VideoFrameMetadata {
  ImageBitmap backgroundSegmentationMask;
};
```

## Blur vs Mask

In terms of API shape, both BG Blur and BG MASK have similar (but separate) media stream track capabilities, constraints and settings. There are some implementation differences:

* BG Blur preprocesses video frames and replaces the original video frames with ones whose background is blurred; thus the original video frames become unavailable from a web application point of view (until BG Blur is disabled).

* BG MASK keeps the original frames intact, does the segmentation and provides mask frames in addition to the original video frames; thus web applications receive both the original frames and the mask frames in the same video frame stream (effectively doubling the frame rate).

With BG MASK, web applications must be able to separate the original video frames from the mask frames, and for that we are adding a boolean to VideoFrameMetadata (see the sketch below, after the Mask Data section).

As an implementation detail, mask frames get interleaved in the stream; the order is first the mask frame, then the original frame. Could the actual mask be provided as frame metadata instead of being provided as a separate frame?

TODO(eero): Explore the alpha channel. WebCodecs does not support an alpha channel today.

## Mask Data -- VideoFrame / ImageBitmap / ImageData

How should the mask data be presented?

* **VideoFrame**: VideoFrame (for WebCodecs) and HTMLVideoElement are the only sources supported by the WebGPU zero-copy path via [importExternalTexture](https://www.w3.org/TR/webgpu/#dom-gpudevice-importexternaltexture). [WebCodecs integration](https://developer.chrome.com/blog/new-in-webgpu-116#webcodecs_integration) adds support for using a ```VideoFrame``` as the source for a [GPUExternalTexture](https://developer.mozilla.org/en-US/docs/Web/API/GPUExternalTexture) and for a [copyExternalImageToTexture()](https://developer.mozilla.org/en-US/docs/Web/API/GPUQueue/copyExternalImageToTexture) call.
  - Maybe not for the majority of users, but in some cases, with high-resolution cameras, it's possible that the VideoFrame is 4K and the BG segmentation mask is 4K too. Other than that, CPU residency is good enough.

  [Elad](https://github.com/w3c/mediacapture-extensions/pull/142#discussion_r1600063736), [Jan-Ivar](https://github.com/w3c/mediacapture-extensions/pull/142#discussion_r1628541641) and [Eugene](https://www.w3.org/2024/06/18-mediawg-minutes.html#t04) think VideoFrame is not needed for this use case.

* Both **ImageData** and **ImageBitmap** have the same CPU-backed resource support.
  - ImageData has a read-only `data` attribute, which is better than ImageBitmap; an ImageBitmap always relies on others to read back its contents.
  - ImageData requires the data to be in RGBA format in a `Uint8ClampedArray`, which always requires a conversion from a grayscale source.
  - The 2D canvas also exposes an API named [putImageData()](https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/putImageData) to upload contents without conversions (e.g. alpha-related ones). ImageBitmap can only rely on the draw call, and some conversions (e.g. alpha) might happen.
  - ImageBitmap has flexible support for formats other than RGBA and also for GPU-texture-backed resources.

Thanks @shaoboyan @eladalon1983 @Djuffin
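As a minimal sketch of the separation mentioned in Blur vs Mask (assuming the interleaved-frame design in which a boolean in the video frame metadata marks mask frames; `concealBackground` is an illustrative application-provided helper):

```js
// Sketch: separate interleaved mask frames from original frames.
// Assumes frame.metadata().backgroundSegmentationMask is a boolean flag.
let lastMaskFrame = null;

const transformer = new TransformStream({
  async transform(frame, controller) {
    if (frame.metadata().backgroundSegmentationMask) {
      // A mask frame: remember it, do not forward it downstream.
      if (lastMaskFrame)
        lastMaskFrame.close();
      lastMaskFrame = frame;
      return;
    }
    // An original frame: conceal its background using the last mask frame.
    const newFrame = await concealBackground(frame, lastMaskFrame);
    frame.close();
    controller.enqueue(newFrame);
  }
});
```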
## Exposing change of MediaStreamTrack configuration

The configuration (capabilities, constraints or settings) of a MediaStreamTrack may change dynamically outside the control of web applications.
One example is when a user decides to switch on background blur through the operating system.
Web applications might want to know that the configuration of a particular MediaStreamTrack has changed.
For that purpose, a new event is defined below.

```js
partial interface MediaStreamTrack {
  attribute EventHandler onconfigurationchange;
};
```

[PR](https://github.com/w3c/mediacapture-extensions/pull/61)

## Example

* main.js:
```js
// main.js:
// Open camera.
const stream = await navigator.mediaDevices.getUserMedia({video: true});
const [videoTrack] = stream.getVideoTracks();

// Use a video worker and show the result to the user.
const videoElement = document.querySelector('video');
const videoWorker = new Worker('video-worker.js');
videoWorker.postMessage({videoTrack}, [videoTrack]);
const {data} = await new Promise(r => videoWorker.onmessage = r);
videoElement.srcObject = new MediaStream([data.videoTrack]);
```
* video-worker.js:
```js
self.onmessage = async ({data: {videoTrack}}) => {
  const processor = new MediaStreamTrackProcessor({track: videoTrack});
  const readable = processor.readable;

  const capabilities = videoTrack.getCapabilities();
  if ((capabilities.backgroundBlur || []).includes(true)) {
    // The platform supports background blurring.
    // Let's use platform background blurring.
    // No transformers are needed.
    await videoTrack.applyConstraints({
      backgroundBlur: {exact: true}
    });
    // Pass the same video track back to the main thread.
    self.postMessage({videoTrack}, [videoTrack]);
  } else {
    // The platform does not support background blurring or
    // does not allow it to be enabled.
    // Let's use custom face detection to aid custom background blurring.
    importScripts('custom-face-detection.js', 'custom-background-blur.js');
    if ((capabilities.backgroundSegmentationMask || []).includes(true)) {
      // The platform supports background segmentation masks.
      // Let's use them to aid the custom face detection.
      await videoTrack.applyConstraints({
        backgroundSegmentationMask: {exact: true}
      });
    }
    const transformer = new TransformStream({
      async transform(frame, controller) {
        // Use the custom face detection.
        const detectedFaces = await detectFaces(
          frame,
          frame.metadata().backgroundSegmentationMask);
        // Use the custom background blurring.
        const newFrame = await blurBackground(frame, detectedFaces);
        frame.close();
        controller.enqueue(newFrame);
      }
    });
    // Transform streams are needed.
    // Use a generator to generate a new video track and pass it to the main thread.
    const generator = new VideoTrackGenerator();
    self.postMessage({videoTrack: generator.track}, [generator.track]);
    // Pipe through the custom transformer.
    await readable.pipeThrough(transformer).pipeTo(generator.writable);
  }
};
```
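Complementing the example above, a short sketch of reacting to the `configurationchange` event defined earlier (assuming a `videoTrack` held on the current thread; `updateBlurIndicator` is an illustrative application function):

```js
// Sketch: observe configuration changes made outside the page,
// e.g. the user toggling background blur from operating system controls.
videoTrack.addEventListener('configurationchange', () => {
  const {backgroundBlur} = videoTrack.getSettings();
  // Illustrative: reflect the new platform state in the app UI.
  updateBlurIndicator(backgroundBlur === true);
});
```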
## Background Segmentation Mask Example: Green Background
```js
window.addEventListener('DOMContentLoaded', async event => {
  const stream = await navigator.mediaDevices.getUserMedia({video: true});
  const [videoTrack] = stream.getVideoTracks();
  const videoCapabilities = videoTrack.getCapabilities();
  if ((videoCapabilities.backgroundSegmentationMask || []).includes(true)) {
    await videoTrack.applyConstraints({
      backgroundSegmentationMask: {exact: true}
    });
  } else {
    // No background segmentation mask support. Do something else.
  }
  const videoSettings = videoTrack.getSettings();
  const videoProcessor = new MediaStreamTrackProcessor({track: videoTrack});
  const videoGenerator = new MediaStreamTrackGenerator({kind: 'video'});
  const videoElement = document.querySelector('video');
  videoElement.srcObject = new MediaStream([videoGenerator]);
  await videoProcessor.readable
    .pipeThrough(new TransformStream({
      start(controller) {
        console.log('start');
        this.height = videoSettings.height;
        this.width = videoSettings.width;
        this.backgroundCanvas = new OffscreenCanvas(this.width, this.height);
        this.canvas = new OffscreenCanvas(this.width, this.height);
      },
      transform(videoFrame, controller) {
        const backgroundSegmentationMask =
          videoFrame.metadata().backgroundSegmentationMask;

        if (backgroundSegmentationMask) {
          const backgroundContext = this.backgroundCanvas.getContext('2d');
          const context = this.canvas.getContext('2d');

          // Draw a green background and a black foreground:
          // Invert and draw the mask frame.
          backgroundContext.globalCompositeOperation = 'copy';
          backgroundContext.fillStyle = 'white';
          backgroundContext.fillRect(0, 0, this.width, this.height);
          backgroundContext.globalCompositeOperation = 'difference';
          backgroundContext.drawImage(backgroundSegmentationMask, 0, 0);
          // Draw the background color.
          backgroundContext.globalCompositeOperation = 'multiply';
          backgroundContext.fillStyle = 'lime';
          backgroundContext.fillRect(0, 0, this.width, this.height);

          // Draw the foreground and a black background:
          // Draw the mask frame.
          context.globalCompositeOperation = 'copy';
          context.drawImage(backgroundSegmentationMask, 0, 0);
          // Draw the foreground from the video frame.
          context.globalCompositeOperation = 'multiply';
          context.drawImage(videoFrame, 0, 0);

          // Combine the foreground and the green background.
          context.globalCompositeOperation = 'lighter';
          context.drawImage(this.backgroundCanvas, 0, 0);
        } else {
          // Draw green.
          const context = this.canvas.getContext('2d');
          context.globalCompositeOperation = 'copy';
          context.fillStyle = 'lime';
          context.fillRect(0, 0, this.width, this.height);
        }

        // Create and enqueue a new video frame.
        const {timestamp} = videoFrame;
        videoFrame.close();
        controller.enqueue(new VideoFrame(this.canvas, {timestamp}));
      }
    }))
    .pipeTo(videoGenerator.writable);
});
```
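For a transparent background instead of a green one (one of the background replacement techniques mentioned in the Non-goals update), plain canvas compositing is not enough, because the grayscale mask carries no alpha. A sketch under the same assumptions as above (white-foreground mask; an extra `maskCanvas` scratch canvas created in `start()` is assumed) could move the mask's luminance into the alpha channel:

```js
// Sketch: turn the grayscale mask into an alpha channel.
const context = this.canvas.getContext('2d', {willReadFrequently: true});
const maskContext = this.maskCanvas.getContext('2d', {willReadFrequently: true});

// Draw the original frame and read its pixels.
context.globalCompositeOperation = 'copy';
context.drawImage(videoFrame, 0, 0);
const pixels = context.getImageData(0, 0, this.width, this.height);

// Draw the mask frame and read its pixels.
maskContext.globalCompositeOperation = 'copy';
maskContext.drawImage(backgroundSegmentationMask, 0, 0);
const mask = maskContext.getImageData(0, 0, this.width, this.height);

// Copy the mask's red channel (its luminance) into the frame's alpha channel.
for (let i = 0; i < pixels.data.length; i += 4) {
  pixels.data[i + 3] = mask.data[i];
}
context.putImageData(pixels, 0, 0);
```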
## Demo

https://github.com/riju/backgroundBlur/assets/975872/27b320aa-ee66-48ee-968f-3f0a11c75d38

[Demo Page](https://github.eero.xn--hkkinen-5wa.fi/web-platform-demos/mediacapture/background-segmentation-mask/) by [@eehakkin](https://github.com/eehakkin)
Inspired by [webrtcHacks](https://github.com/webrtcHacks/transparent-virtual-background/tree/master), thanks [@fippo](https://github.com/fippo) and [@chadwallacehart](https://github.com/chadwallacehart)

#### Implementation Detail:
The demo uses transform streams to preprocess frames before displaying them on a video element. The demo always keeps the last mask frame but does not pass mask frames to the video element (except for demo purposes). Whenever the demo receives an original video frame, it uses that frame and the last mask frame to create a new video frame whose foreground is covered with a foreground overlay color (blue by default), and it enqueues that new frame to be passed to the video element instead of the original (and mask) frames.

## Security considerations

The Background Blur feature does not expose any more security concerns than a video call without it. Because of the demand for background blur, many products use a cloud-based solution to achieve conformance across a myriad of client devices. Modern clients are efficient enough to handle such popular tasks as background blur locally, either by leveraging AI accelerators or by using specific vector instructions like AVX.

The support for background blur is a track invariant; web applications cannot affect it. Tracks which originate from `MediaDevices.getUserMedia()` may support background blur if the selected camera and the platform support it. Tracks which originate from other sources do not support background blur.

The background blur capability (a boolean sequence describing whether it is possible to enable and/or to disable background blur) is provided by the User-Agent and cannot be modified by web applications. It may, however, change, for instance if the user uses operating system controls to enforce background blur or to remove such an enforcement.

The background blur constraints are provided by web applications by passing them to `navigator.mediaDevices.getUserMedia()` or to `MediaStreamTrack.applyConstraints()`. Constraints allow web applications to change settings within the bounds of capabilities.

The current background blur setting (a boolean describing whether background blur is currently in effect) is provided by the User-Agent. It may change if the user uses operating system controls to disable or to enable background blur, but also if a web application uses `MediaStreamTrack.applyConstraints()` to select a supported background blur setting other than the current one. If the background blur capability does not include multiple supported boolean settings, web applications cannot change the background blur setting.

## Privacy considerations

Background Blur is supposed to enhance the privacy of the user compared to a video call without it. Users are often in video calls where they do not know the audience well enough.
It's advisable not to accidentally share more personal information than required, information which might otherwise be exposed without any form of background concealment. When in doubt, it's better to blur it out, and users can change the blur intensity depending on who is on the other side of the call.

### Fingerprinting

If a site does not have [permissions](https://w3c.github.io/permissions/), background blur provides practically no fingerprinting possibilities.
The only provided information is `navigator.mediaDevices.getSupportedConstraints().backgroundBlur`, which either is true (if the User-Agent supports background blur in general, irrespective of whether the device has a camera or a platform version which supports background blur) or does not exist.
That same information can most probably be obtained by other means as well, such as from the User-Agent string.

If a site utilizes `navigator.mediaDevices.getUserMedia({video: {}})`, which resolves only after the user has [granted](https://w3c.github.io/permissions/#dfn-granted) the ["camera"](https://www.w3.org/TR/mediacapture-streams/#dfn-camera) permission, the returned video tracks may have `backgroundBlur` capabilities and settings.
The `backgroundBlur` capability can be either non-existent, `[false]`, `[true]` or `[false, true]`, and the setting can be either non-existent, false or true.
Based on the capability, it is possible to determine whether the platform is one which allows applications only to observe background blur setting changes or one which allows applications also to set the background blur setting.
In essence, this splits operating systems into two groups but does not differentiate between platform versions.

All the frames whose backgrounds are blurred originate from cameras.
No methods are provided for sites to insert frames for background blurring.
As such, sites cannot fingerprint the background blur implementation, as they have no access to the original frames and have access to the blurred frames only if the user has [granted](https://w3c.github.io/permissions/#dfn-granted) the ["camera"](https://www.w3.org/TR/mediacapture-streams/#dfn-camera) permission.

## Stakeholder Feedback / Opposition

[Implementors and other stakeholders may already have publicly stated positions on this work. If you can, list them here with links to evidence as appropriate.]

- [Safari] : [Positive](https://lists.webkit.org/pipermail/webkit-dev/2022-July/032321.html). Youenn (Apple) is a co-author.
- [Firefox] : [Positive](https://github.com/mozilla/standards-positions/issues/658#issuecomment-1477070865).

[If appropriate, explain the reasons given by other implementors for their concerns.]

## References & acknowledgements

Many thanks for valuable feedback and advice from:

- [Tuukka Toivonen]
- [Bernard Aboba]
- [Harald Alvestrand]
- [Jan-Ivar Bruaroey]
- [Youenn Fablet]
- [Dominique Hazael-Massieux]
- [François Beaufort]
- [Elad Alon]
- [Shaobo Yan]

## Disclaimer

Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel's Global Human Rights Principles. Intel's products and software are intended only to be used in applications that do not cause or contribute to a violation of an internationally recognized human right.
Performance varies by use, configuration and other factors. Learn more on the [Performance Index site](https://edc.intel.com/content/www/us/en/products/performance/benchmarks/overview/).

Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.

Your costs and results may vary.

Intel technologies may require enabled hardware, software or service activation.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
--------------------------------------------------------------------------------

/ptat-bg5fps-rel.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/riju/backgroundBlur/9f93410ae2de8a72fca98f6d6dd7a72fb2ed5ce5/ptat-bg5fps-rel.png
--------------------------------------------------------------------------------

/security-privacy-self-assessment.md:
--------------------------------------------------------------------------------

# Security and Privacy Questionnaire

[Self-Review Questionnaire: Security and Privacy](https://w3ctag.github.io/security-questionnaire/)

> 01. What information does this feature expose, and for what purposes?

Background Blur is supposed to enhance the privacy of the user compared to a video call without it. Users are often in video calls where they do not know the audience well enough. It's advisable not to accidentally share more personal information than required, information which might otherwise be exposed without any form of background concealment. When in doubt, it's better to blur it out, and users can change the blur intensity depending on who is on the other side of the call.

> 02. Do features in your specification expose the minimum amount of information
> necessary to implement the intended functionality?

Yes. This specification exposes background blur as a constrainable property on a video stream, usually one coming from a webcam.

> 03. Do the features in your specification expose personal information,
> personally-identifiable information (PII), or information derived from
> either?

The information exposed by this API is not personal information or personally-identifiable information.

> 04. How do the features in your specification deal with sensitive information?

This specification does not share additional sensitive information, such as camera serial numbers, with the Web.

That being said, this specification extends the [Media Capture and Streams](https://w3c.github.io/mediacapture-main/) specification which itself exposes video and audio streams from the user or from the user's environment to Web sites with proper permissions.

> 05. Do the features in your specification introduce state
> that persists across browsing sessions?

No.

> 06. Do the features in your specification expose information about the
> underlying platform to origins?

No particular additional information is exposed about the camera hardware (such as the brand); however, we do expose whether the platform has support for the constrainable properties.
> 07. Does this specification allow an origin to send data to the underlying
> platform?

This specification does not allow an origin access to any new sensors on a user's device.

That being said, this specification extends the [Media Capture and Streams](https://w3c.github.io/mediacapture-main/) specification which itself allows an origin access to internal and external cameras and microphones on a user's device per permission requests.

> 08. Do features in this specification enable access to device sensors?

See [01]. Also note that this specification extends the [Media Capture and Streams](https://w3c.github.io/mediacapture-main/) specification which exposes video and audio streams from the user or from the user's environment to Web sites per permission requests.

> 09. Do features in this specification enable new script execution/loading
> mechanisms?

No.

> 10. Do features in this specification allow an origin to access other devices?

No. See [07].

> 11. Do features in this specification allow an origin some measure of control over
> a user agent's native UI?

No.

> 12. What temporary identifiers do the features in this specification create or
> expose to the web?

[getUserMedia](https://www.w3.org/TR/mediacapture-streams/) might expose a deviceId. This specification does not expose anything extra.

> 13. How does this specification distinguish between behavior in first-party and
> third-party contexts?

Background Blur, as an extension of getUserMedia, is exposed only to a secure top-level browsing context.

> 14. How do the features in this specification work in the context of a browser's
> Private Browsing or Incognito mode?

It will work the same way; however, as soon as that session ends, the permission status will be cleared.

> 15. Does this specification have both "Security Considerations" and "Privacy
> Considerations" sections?

The [Mediacapture-Extensions spec](https://w3c.github.io/mediacapture-extensions/) does not yet have Privacy and Security sections. However, this is similar to the ImageCapture specification; see its [Security and Privacy](https://w3c.github.io/mediacapture-image/#securityandprivacy) section. Background Blur and its privacy and security analysis are covered in a separate [explainer](https://github.com/riju/backgroundBlur/blob/main/explainer.md#security-considerations). This should be an addendum to the main specification's [Privacy section](https://w3c.github.io/mediacapture-main/#privacy-and-security-considerations).

> 16. Do features in your specification enable origins to downgrade default
> security protections?

By default, this specification uses the same permission, namely `camera`, as the [mediacapture-main](https://w3c.github.io/mediacapture-main/) specification.

> 17. What happens when a document that uses your feature is kept alive in BFCache
> (instead of getting destroyed) after navigation, and potentially gets reused
> on future navigations back to the document?

> 18. What happens when a document that uses your feature gets disconnected?

> 19. What should this questionnaire have asked?
--------------------------------------------------------------------------------