├── text
│   ├── 0048-updater-branches
│   │   └── settings.png
│   ├── 0049-multiple-video-mixes
│   │   ├── output-simple.png
│   │   └── output-advanced.png
│   ├── 0033-replay-buffer-save-frontend-event.md
│   ├── 0000-fragmented-recording.md
│   ├── 0026-hls-ingestion.md
│   ├── 0048-updater-branches.md
│   ├── 0043-webrtc-output.md
│   ├── 0049-multiple-video-mixes.md
│   └── 0045-add-notion-of-protocol.md
├── 0000-template.md
├── accepted
│   ├── 0023-change-default-theme.md
│   ├── 0007-color-space-srgb.md
│   ├── 0005-media-controls.md
│   ├── 0020-flatpak.md
│   ├── 0001-conference-calling.md
│   ├── 0014-wayland.md
│   ├── 0004-undo-redo.md
│   ├── 0015-virtual-camera-support.md
│   └── 0019-app-notifications.md
└── README.md

/text/0048-updater-branches/settings.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/obsproject/rfcs/HEAD/text/0048-updater-branches/settings.png
--------------------------------------------------------------------------------
/text/0049-multiple-video-mixes/output-simple.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/obsproject/rfcs/HEAD/text/0049-multiple-video-mixes/output-simple.png
--------------------------------------------------------------------------------
/text/0049-multiple-video-mixes/output-advanced.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/obsproject/rfcs/HEAD/text/0049-multiple-video-mixes/output-advanced.png
--------------------------------------------------------------------------------
/0000-template.md:
--------------------------------------------------------------------------------
# Summary

A simple explanation of the feature/changes.

# Motivation

What problem is this solving? What are the common use cases?

# Drawbacks

What is the potential detriment of adding this feature/change?
# Additional Information

Any additional information that may not be covered above that you feel is relevant. External links, references, examples, etc.
--------------------------------------------------------------------------------
/text/0033-replay-buffer-save-frontend-event.md:
--------------------------------------------------------------------------------
# Summary

Add a frontend event to the API that would enable plugins to react to the replay buffer being saved.

# Motivation

Currently, the API does not provide a way for a plugin to react to the replay buffer being saved.
Note that the following 4 events are already available in the C frontend API:

- OBS_FRONTEND_EVENT_REPLAY_BUFFER_STARTING
- OBS_FRONTEND_EVENT_REPLAY_BUFFER_STARTED
- OBS_FRONTEND_EVENT_REPLAY_BUFFER_STOPPING
- OBS_FRONTEND_EVENT_REPLAY_BUFFER_STOPPED

This one missing event will extend the plugin API's capabilities, allowing for more interaction.

Implementation:

Add `OBS_FRONTEND_EVENT_REPLAY_BUFFER_SAVED` to the `obs_frontend_event` enum in `obs-frontend-api.h` and emit that event in the GUI interaction part.

# Drawbacks

There should be no drawbacks other than the ABI change; i.e., unless the new event is appended at the end of the enum, all plugins that make use of the enum will need to be recompiled.

# Additional Information

None.
--------------------------------------------------------------------------------
/accepted/0023-change-default-theme.md:
--------------------------------------------------------------------------------
- Start Date: 2020-04-22
- RFC PR: #23
- Mantis Issue: N/A

# Summary

Change the default theme in OBS to a cleaner, more modern-looking and approachable design.

# Motivation

The current default theme (Dark) is very dated design-wise.
It uses a rather small font size and lacks padding and margins on most elements.

While this allows the theme to be very compact, it leads to the interface feeling very cramped, and it is intimidating to new users due to visual overload.

The suggested replacement is a new theme I've designed that I've tentatively named Yami. It has a larger font size, more space around elements to let them breathe, and a better color hierarchy (buttons and elements are brighter than their containers, which are brighter than the background).

# Drawbacks

Some users will undoubtedly prefer Dark for its sleek and compact design.

As such, I would recommend we keep it under a new name such as "Compact" and have this new theme take the name "Dark".

# Additional Information

This theme is mostly complete pending anything I've missed, and available for testing on the obs-studio repo.

Screenshots are located on the PR.
--------------------------------------------------------------------------------
/accepted/0007-color-space-srgb.md:
--------------------------------------------------------------------------------
# Summary

Add sRGB to the "Color Space" output options, and default to sRGB.

# Motivation

There is often confusion around the "Color Space" option for output, and it has several problems.

- We don't actually convert colors between RGB color spaces in OBS. We merely tag the metadata.
- Rec. 601 output support is weird. x264 tags 601 as "undef" for everything, and jim-nvenc uses the European 625-line variant of the 601 attributes.
- Any PC applications/games that output sRGB are tagged as 601/709, and will be handled incorrectly by color-accurate applications, e.g. old versions of Chrome handled 709 properly before they decided to cheat to save power.

Defaulting to sRGB is probably the least of all evils.

- sRGB is identical to Rec. 709 except for the sRGB transfer function, which can be a free conversion for most GPUs. When represented as YCbCr, it makes sense to transform with Rec. 709 matrix coefficients.
- PC applications/games that use sRGB (very common) are now tagged accurately if using sRGB. Video capture feeds that use 601/709 would not be, but we can retain the 601/709 settings for passthrough scenarios. Actually converting the color data is beyond the scope of this RFC.
- Chrome supports sRGB videos properly on all tested platforms.
- Rec. 601 for output was almost always the wrong choice.

Implementation:

- Add VIDEO_CS_SRGB, and plumb it into all switch cases.
- Go through the output encoder settings, and use sRGB metadata values.

# Drawbacks

People who had a working color pipeline with the old faulty settings may have to tweak their setup.

There is supposedly a performance regression involving DeckLink when switching away from 601.

# Additional Information

None.
--------------------------------------------------------------------------------
/accepted/0005-media-controls.md:
--------------------------------------------------------------------------------
- Start Date: 2020-01-13
- RFC PR: #6
- Mantis Issue: https://obsproject.com/mantis/view.php?id=329

- Related PRs:
  1. https://github.com/obsproject/obs-studio/pull/2300
  2. https://github.com/obsproject/obs-studio/pull/2274
  3. https://github.com/obsproject/obs-studio/pull/1803

# Summary

Create the ability for users to control the media and VLC sources.

# Motivation

Users should be able to have more control over media sources. With a media control widget, they can see how much time is left in the video, seek to a certain point, play/pause, etc.

# Detailed design

## User UX

- Have the media controls as a single widget under or near the preview.
- Slider: The slider would be used to seek to specific points in the media.
- Timer labels: on either side of the slider, showing the total length of the media and the current position within it.
- Buttons: Previous, Play/Pause/Restart, Stop, Next.

- Config button: A dropdown would show up with several options:
  1. Properties (open the properties of the current source)
  2. Filters (open the filters of the current source)
  3. Speed (0.25x, 0.5x, 0.75x, 1.0x, etc.), set the speed of the current source
  4. Loop (sets whether to loop the media)

## Other considerations

- The media source currently doesn't have playlist, play/pause, or seek support (the PRs mentioned above implement these)
- The VLC source needs seek support (https://github.com/obsproject/obs-studio/pull/1803 adds this)
- Ideally the media source should have feature parity with the VLC source

# How We Teach This

Have the widget on by default, so users don't have to dig through settings or menus to enable it.

# Drawbacks

More UI would take up more space.

# Alternatives

The VLC source currently has hotkeys to control media. Hotkeys are not ideal because users may not know that they exist.
--------------------------------------------------------------------------------
/accepted/0020-flatpak.md:
--------------------------------------------------------------------------------
- Start Date: 2020-04-18
- RFC PR: #21
- Mantis Issue: N/A

# Summary

Flatpak is an app packaging and distribution mechanism widely available on Linux
distributions. On some distributions, Flatpak is part of the default experience.
Flatpak allows tighter control over the environment applications run in and how
they are packaged and installed, and also isolates the application from the host
system.
The goal is to add support for Flatpak, both as a development platform and as
a distribution platform.

# Motivation

As a complex multi-platform project, OBS Studio has a wide surface for breakage.
Bad packaging can be a real problem, given that many Linux distributions package
OBS Studio in slightly different and incompatible ways.

By supporting Flatpak, OBS Studio benefits from having tight control over the
execution environment and the packaging of plugins and dependencies. This will
reduce the number of moving parts when running OBS Studio, which makes bugs much
easier to reproduce and, consequently, to fix.

## Internals

There are no code changes involved in adding Flatpak support. It is simply a
matter of adding a new file: a JSON-formatted manifest describing the
dependencies, permissions, and platform that OBS Studio depends on.

# How We Teach This

Because this is an addition to the platform, users shouldn't be required to learn
about Flatpak. For developers, little will change.

# Drawbacks

No known drawbacks.

# Additional Information

Supporting Flatpak does not prevent OBS Studio from supporting other distribution
mechanisms, nor will it affect Linux distributions that package OBS Studio manually.

# Unresolved questions

* Should OBS Studio use Flatpak as part of the CI process?
* Should Flatpak be part of the release process?
--------------------------------------------------------------------------------
/text/0000-fragmented-recording.md:
--------------------------------------------------------------------------------
# Summary

Fragmented MP4, or fMP4, addresses the drawback of MP4/MOV files requiring "finalisation" by splitting the file into "fragments" that can be read and decoded independently.
This means that if a file is not finalised, it is still readable up to the penultimate fragment.

Functionally, this can already be done manually by setting the following custom muxer options: `movflags=frag_keyframe+empty_moov+delay_moov`.
This will fragment the file on keyframes, essentially making the file recoverable up to the last GOP should the file not be finalised (e.g. FFmpeg muxer process crash or unexpected system shutdown).

Despite this being generally referred to as "fragmented MP4", the same can be applied to MOV, which is a sister format to MP4.

This RFC proposes making this the default for the MP4/MOV container family.
We may also consider changing the default container from MKV to MP4/MOV if real-world tests show it to be reliable enough.

# Motivation

While the current default format, MKV, is resilient against crashes, it does not have the greatest compatibility.
Most browsers, some video players, and especially many video editors do not (properly) support MKV containers.

There are also some known issues with MKV related to how it stores the video's frame rate that can result in issues with playback/seeking in editors,
as well as potential issues when writing files with a very high bitrate (such as ProRes) on an I/O-limited machine (e.g. HDD or network share).

Additionally, with the upcoming ProRes support generally being expected to use MOV, having higher resilience for that container as well is desirable.

Another benefit is that platforms such as YouTube can start transcoding while the file is still being uploaded, akin to a "faststart" MP4.

# Implementation

The proposed implementation would show a new checkbox for enabling fragmented recording when MP4 or MOV is selected (default: on).
This checkbox will have a tooltip explaining fragmented recording, including a notice that remuxing may still be required for compatibility with some older software.

If the checkbox is enabled, the muxer option `movflags` will be set to `frag_keyframe+empty_moov+delay_moov`, unless the user has already manually specified `movflags`.

Additionally, the current MP4/MOV warning would only be shown if fragmentation is disabled.

# Drawbacks

While fragmented MP4 is well supported in most video players, browsers, and editors, some older or more niche software may not fully support it.

Viewing the raw file via an HTTP URL may be slow in some browsers (e.g. Chrome), as they require reading all fragment metadata before playback.
This is especially true if the CDN/origin does not support range requests.

Additionally, the fragmented nature of fMP4 can result in a small amount of overhead when compared to a traditional MP4 file,
although in many cases the file will actually be slightly smaller.

# Additional Information

A general introduction to the MP4 format, which also covers fMP4, can be found here: https://www.agama.tv/demystifying-the-mp4-container-format/
--------------------------------------------------------------------------------
/text/0026-hls-ingestion.md:
--------------------------------------------------------------------------------
# Summary

Add HLS output as one of the live streaming output options in OBS.

# Motivation

[HLS](https://tools.ietf.org/html/draft-pantos-hls-rfc8216bis-07) live streaming is supported by a number of platforms. It is a generic output method. FFmpeg has an HLS option as a muxer output: https://ffmpeg.org/ffmpeg-formats.html#hls-2. If a URL is specified, it will stream the output to the destination.
Akamai has a [specification](https://learn.akamai.com/en-us/webhelp/media-services-live/media-services-live-encoder-compatibility-testing-and-qualification-guide-v4.0/GUID-6A14ED6D-0A23-4122-AB60-64A49B6628B5.html) for HLS ingestion as well. Many hardware encoders support HLS output. YouTube also supports HLS ingestion for live streaming, and its [specifications](https://developers.google.com/youtube/v3/live/guides/hls-ingestion) are compatible with the others.

HLS can support next-generation codecs that RTMP does not. Newer codecs can offer much better compression relative to
H.264, allowing users to stream with higher quality for a given bitrate, or stream with the same quality at a lower bitrate, decreasing buffering.

Currently, one can do HLS output in OBS by setting the FFmpeg HLS option in the recording output settings and starting a recording. This is a bit convoluted, though. So the goal of this RFC is to make the HLS output option more user-friendly and more accessible.

# Drawbacks

HLS output has higher latency than RTMP.

# Additional Information

## UX Changes

Following the convention of the current settings for RTMP output, we will add an option for "YouTube - HLS". Note that the changes made by this RFC will lay the groundwork for adding a generic HLS output service option. The "YouTube - HLS" option just has the YouTube-specific settings, such as server URLs.

- Under Settings > Stream > Service dropdown list, add a new option for
  “YouTube - HLS”.
- Display a "More Info" button next to the service dropdown list if "YouTube - HLS"
  is selected. The button leads to https://developers.google.com/youtube/v3/live/guides/ingestion-protocol-comparison.
- There will be two Server choices: “Primary YouTube ingest server” and
  “Backup YouTube ingest server”. The primary server is for primary
  ingestion and the backup server is for backup ingestion.
- Users will copy the Stream Key from YouTube’s Creator Studio, similar to
  RTMP. See the detailed instructions in Section 4, “Connect your encoder and go
  live”, from
  “[Create a live stream with an encoder](https://support.google.com/youtube/answer/2907883?hl=en)”.
- Rename the existing “YouTube / YouTube Gaming” option in the Service
  dropdown list to “YouTube - RTMP”.

![YouTube - HLS service option](https://user-images.githubusercontent.com/233044/85955326-c1016000-b94b-11ea-8781-6027768c629b.png)

## Implementation

FFmpeg already has HLS as a muxer output option: https://ffmpeg.org/ffmpeg-formats.html#hls-2, so we can use the existing
FFmpeg implementation via the [FFmpeg output](https://github.com/obsproject/obs-studio/blob/master/plugins/obs-ffmpeg/obs-ffmpeg-output.c) plugin. The options for HLS output in FFmpeg are very comprehensive, including segment duration, playlist size, HTTP user agent, etc., which should be able to meet the requirements of most services accepting HLS ingestion.

For this RFC, we are going to add support for HLS with H.264. This will lay the groundwork for adding support for additional codecs in the future.
--------------------------------------------------------------------------------
/accepted/0001-conference-calling.md:
--------------------------------------------------------------------------------
- Start Date: 2019-11-15
- RFC PR: #2
- Mantis Issue: N/A

# Summary

Implement an over-the-web conference calling system that can be joined through a web URL, with separate video and audio feeds for each participant in OBS.

# Motivation

Currently, the only option for easily bringing separate video and audio feeds of a conference call into OBS is having Skype output NDI, which is then picked up by the obs-ndi plugin.
This solution requires the user to install a plugin that doesn't ship with OBS, and to use Skype, which puts a watermark over the NDI feed. The solution is also almost entirely closed source.

Having an open source, peer-to-peer system that can ingest directly into OBS would allow for much easier collaboration and remote workflows in OBS.

# Detailed design

The calling should be done over WebRTC, so that anyone with a modern web browser will be able to simply receive a link in order to join a call. As WebRTC is a big binary on its own, the system in OBS should utilize a compatible but lightweight library. Since we'll want to be dealing with data directly, this can be achieved with either [librtcdc](https://github.com/xhs/librtcdc) or [librtcdcpp](https://github.com/chadnickbok/librtcdcpp), which implement only the data channel parts of the WebRTC standard.

In Tools, there should be a menu item for creating calls. On clicking the menu item, a dialog should come up with an obsproject.com URL that can be sent to people who will participate in the call, an option to pick a webcam and mic to transmit back to the participants, and an option to mute the call mix going out to desktop audio. The OBS client will then communicate with a server running on obsproject.com to wait for clients to peer with it using WebRTC.

When a participant opens the URL and grants permissions, the page should communicate its capabilities (video, audio, screenshare, etc.) to the OBS client, and wait for the OBS client to request a media stream. When the client is requested to provide a stream, the webpage should start a MediaRecorder() instance and send the raw byte data over the WebRTC data channel.

OBS will receive the video and audio over the WebRTC data channel, and send it to libavcodec to decode. It will then provide a video feed back over the data channel to all users that switches between cameras based on who is speaking.
This feed will contain an audio mix of all participants.

OBS should add a source for displaying one of these decoded feeds. The source should contain a drop-down to pick the feed to pull into OBS, containing the following options:

- Active Speaker, which will contain a mix of everyone's audio, and will automatically switch video based on who is talking.
- Active Remote Speaker, which does the same as above, but will not show the local feed.
- Separate feeds for each of the participants in the call.

The server hosted by the OBS Project should also be open source. It will not handle broadcasting any of the video itself; it will merely facilitate the handshake between peers.

# How We Teach This

We will definitely need a guide written about using this system. A video will have to be made if this releases as part of an update. The source for call participants should have a label pointing people to the menu in Tools to create a call, as a means of discoverability.

# Drawbacks

This requires the OBS Project to host a server to facilitate P2P handshakes. Calling will be down if the host for the site is ever down, since a URL needs to be generated. Serving static HTML that can be cached for an entire directory may be a solution for this, if the server doesn't have to be active to handle the peering.

# Alternatives

Compiling OBS with the full WebRTC library and using the video channel instead. Another option is to provide a solution based around the existing browser source (although this may result in several more WebRTC connections to display feeds). Alternatively, we could find an existing open source conferencing program with a compatible license that could be integrated into OBS directly.
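The "Active Speaker" behaviour described in the detailed design ultimately needs logic that decides who is currently speaking. A rough, self-contained illustration of that idea in plain C (this is not proposed OBS code; the function names and the threshold handling are made-up assumptions):

```c
#include <stddef.h>

/* Illustrative active-speaker selection: compute the mean square power of
 * each participant's most recent audio samples and pick the loudest feed
 * above a noise threshold. Returns the participant index, or -1 if nobody
 * is currently speaking. */
static double mean_square_power(const float *samples, size_t count)
{
	double sum = 0.0;
	for (size_t i = 0; i < count; i++)
		sum += (double)samples[i] * (double)samples[i];
	return count ? sum / (double)count : 0.0;
}

int pick_active_speaker(const float *const *feeds, size_t num_feeds,
			size_t samples_per_feed, double power_threshold)
{
	int loudest = -1;
	double loudest_power = power_threshold;

	for (size_t i = 0; i < num_feeds; i++) {
		double power = mean_square_power(feeds[i], samples_per_feed);
		if (power > loudest_power) {
			loudest_power = power;
			loudest = (int)i;
		}
	}
	return loudest;
}
```

A real implementation would also need hysteresis (a hold time before switching) so the outgoing feed doesn't flicker between cameras on every syllable, and would run on short rolling windows of the decoded audio.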
--------------------------------------------------------------------------------
/accepted/0014-wayland.md:
--------------------------------------------------------------------------------
- Start Date: 2020-03-09
- RFC PR: #14
- Mantis Issue: N/A

# Summary

Add native Wayland support to OBS Studio.

# Motivation

Wayland is a widespread alternative to the X11 protocol. Most desktop
environments on Linux support Wayland now, and it is the default on GNOME.
There are also many Wayland-only window managers being created every day.

OBS Studio has X11-specific code, which makes it incompatible with Wayland.

# Detailed design

This RFC covers both making OBS Studio able to run as a native Wayland
application, and also adding screen and window captures that work on Wayland
compositors.

## Native Client (✓ done)

I hereby propose a 3-step approach to achieve native Wayland integration:

1. Introduce an EGL-X11 renderer to `libobs-opengl` (✓ done)
2. Add the concept of a Wayland platform to `libobs` (✓ done)
3. Introduce an EGL-Wayland renderer to `libobs-opengl` (✓ done)

Optionally, a 4th step may be:

4. Add an experimental checkbox to the Configuration dialog to enable EGL.

A detailed explanation of each step follows below.

### EGL-X11 (✓ done)

This step involves adding an abstraction layer to the windowing system, renaming
`libobs-opengl/gl-x11.c` to `libobs-opengl/gl-glx.c`, and introducing a new
`libobs-opengl/gl-egl-x11.c` file.

The `glad` dependency would also be updated to include the necessary EGL files.

The abstraction layer is composed of `libobs-opengl/gl-nix.{c|h}`. This is where
the exported functions (i.e. everything included in `struct gs_exports`) are
All these function implementations, however, will look like this: 48 | 49 | ```c 50 | extern void gl_foo_bar(...) 51 | { 52 | gl_winsys->foo_bar(...); 53 | } 54 | ``` 55 | 56 | That's because `gl_winsys` is selected runtime, and may be either the GLX winsys, 57 | or the EGL/X11 winsys. 58 | 59 | ### Wayland Platform (✓ done) 60 | 61 | The Wayland Platform step introduces a way to detect at runtime which platform 62 | OBS Studio is running on. Initially, only Unix implementations will set and 63 | actually use it. 64 | 65 | The header file would look like: 66 | 67 | **libobs/obs-nix-platform.h** 68 | 69 | ```c 70 | enum obs_nix_platform_type { 71 | OBS_NIX_PLATFORM_X11_GLX, 72 | OBS_NIX_PLATFORM_X11_GLX, 73 | OBS_NIX_PLATFORM_WAYLAND, 74 | }; 75 | 76 | EXPORT void obs_set_nix_platform(enum obs_nix_platform_type platform); 77 | EXPORT enum obs_nix_platform_type obs_get_nix_platformobs_get_nix_platform(void); 78 | 79 | EXPORT void obs_set_nix_platform_display(void *display); 80 | EXPORT void *obs_get_nix_platform_display(void); 81 | ``` 82 | 83 | ### EGL-Wayland (✓ done) 84 | 85 | The 3rd and last step is introducing the `libobs-opengl/gl-egl-wayland.{c|h}` 86 | files, and making `gl-nix` retrieve the EGL/Wayland winsys. This is the only 87 | winsys that can be used when running under Wayland. 88 | 89 | ## Screen & Window Capture 90 | 91 | The process for capturing screen and window contents on Wayland compositors 92 | is different than on Xorg. While X11 gives applications the ability to spy on 93 | anything, including other applications, the entire compositor, Wayland proposes 94 | a much stricter security model. 95 | 96 | ### Portals 97 | 98 | On Wayland, the general way of requesting the compositor for sharing the screen 99 | or window contents is through the [ScreenCast portal](screencast-portal). 100 | 101 | Portals are D-Bus interfaces that applications use to interact with the desktop. 
The ScreenCast portal specifically provides applications the necessary tools to
ask the desktop environment to share the contents of a window or a monitor.

[GNOME][gnome-screencast], [KDE][kde-screencast], and [wlroots][wlroots-screencast]
are known to implement the ScreenCast portal. These three should cover the
majority of Wayland compositors in existence today.

### PipeWire

The ScreenCast portal relies on [PipeWire][pipewire] to work. PipeWire is a
service that provides a robust and performant media sharing mechanism for
video and audio.

There already is an out-of-tree plugin, [obs-xdg-portal][obs-xdg-portal], that
implements this. It can serve as a basis for either a new in-tree plugin, or
be incorporated as part of the `linux-capture` plugin.

# How We Teach This

N/A, this simply changes the current behaviour. As long as the patch notes
properly describe the changes, we should be fine.

# Drawbacks

This doesn't change the default behavior of OBS Studio, which reduces the
surface area for regressions.

# Alternatives

Not supporting Wayland?

# Unresolved questions

* Should the capture code be a new in-tree plugin, or incorporated into `linux-capture`?
[gnome-screencast]: https://github.com/flatpak/xdg-desktop-portal-gtk/blob/master/src/screencast.c
[kde-screencast]: https://github.com/KDE/xdg-desktop-portal-kde/blob/master/src/screencast.cpp
[obs-xdg-portal]: https://gitlab.gnome.org/feaneron/obs-xdg-portal/
[pipewire]: https://pipewire.org/
[screencast-portal]: https://github.com/flatpak/xdg-desktop-portal/blob/master/data/org.freedesktop.impl.portal.ScreenCast.xml
[wlroots-screencast]: https://github.com/emersion/xdg-desktop-portal-wlr/blob/master/src/screencast/screencast.c
--------------------------------------------------------------------------------
/accepted/0004-undo-redo.md:
--------------------------------------------------------------------------------
- Start Date: 2020-01-03
- RFC PR: #5

**Table of Contents**

- [Summary](#summary)
- [Motivation](#motivation)
- [Design](#design)
  - [UX](#ux)
  - [Functionality](#functionality)
    - [Core Functionality](#core-functionality)
    - [Memory](#memory)
    - [v1](#v1)
    - [v2](#v2)
    - [Specification](#specification)
- [Requirements](#requirements)

# Summary

Create an undo system that is capable of recording the state of OBS actions, to
easily step the state backwards or forwards in time.

# Motivation

When working with large and intricate setups - whether it be scenes, sources,
etc. - accidents are bound to happen. Currently, there is no way to undo
committed actions. As a result, hours of work can easily be lost.

Implementing a proper undo/redo system would be a drastic quality-of-life
improvement for the majority of users.

# Design

## UX

The UX would model that of most other applications. Under Edit, there would be 2
options: 'Undo' and 'Redo'.
Appended to these actions is a descriptor of what
action is being (re/un)done and on what object. When an action is committed,
for instance deleting a scene, the Undo button becomes activated, allowing the
user to select it.

When activated, it undoes the action and then activates the Redo button, which
allows the user to essentially 'undo' their previous undo. If the user makes
multiple undos, then the user would also have multiple redos available.

When a user undoes and then commits another action, the redos are no longer
possible, as the tree has diverged onto a new path.

Also, like in many other applications, the undo/redo actions should be bound to
Ctrl-Z and Ctrl-Y.

## Functionality

### Core Functionality

The undo/redo system works by saving states. However, saving the entire state of
OBS would use up more memory than necessary when a small action is committed. In
order to prevent this, only the state of the object the action was committed on
would be saved, alongside the action committed. This will allow more seamless
integration into OBS without disturbing the flow, and also keep the amount of
memory used low.

Note that the same methodology would work for the redo system.

The method to create this list would be through the use of two deques. This
allows seamless popping and inserting at the front of the list, essentially
acting like a stack.

The object that would go into the deques is an undo_redo\_t, which contains a
callback function to be called when undo/redo is activated. Several actions
have cleanup that needs to be done, so a third callback function can be
specified that will be called when the object is either cleared from the redo
stack or the undo\_stack goes out of scope and gets cleaned up.
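The two-stack design above can be sketched in C roughly as follows. This is a hedged illustration only: `undo_redo_t` and the three callbacks come from the description above, but the field names, function names, and the use of singly linked stacks in place of deques are assumptions, not the actual OBS implementation:

```c
#include <stdbool.h>
#include <stdlib.h>

typedef void (*undo_redo_cb)(void *data);

struct undo_redo_t {
	undo_redo_cb undo;    /* reverts the action */
	undo_redo_cb redo;    /* re-applies the action */
	undo_redo_cb cleanup; /* frees saved state when the entry is discarded */
	void *data;           /* saved state of the object the action touched */
	struct undo_redo_t *next;
};

struct undo_stack {
	struct undo_redo_t *undo_top;
	struct undo_redo_t *redo_top;
};

static struct undo_redo_t *stack_pop(struct undo_redo_t **top)
{
	struct undo_redo_t *e = *top;
	if (e)
		*top = e->next;
	return e;
}

static void stack_push(struct undo_redo_t **top, struct undo_redo_t *e)
{
	e->next = *top;
	*top = e;
}

/* Committing a new action diverges the history, so any pending redos are
 * discarded (running their cleanup callbacks). Error handling omitted. */
void undo_stack_add(struct undo_stack *s, undo_redo_cb undo, undo_redo_cb redo,
		    undo_redo_cb cleanup, void *data)
{
	struct undo_redo_t *e;
	while ((e = stack_pop(&s->redo_top))) {
		if (e->cleanup)
			e->cleanup(e->data);
		free(e);
	}
	e = malloc(sizeof(*e));
	e->undo = undo;
	e->redo = redo;
	e->cleanup = cleanup;
	e->data = data;
	stack_push(&s->undo_top, e);
}

bool undo_stack_undo(struct undo_stack *s)
{
	struct undo_redo_t *e = stack_pop(&s->undo_top);
	if (!e)
		return false;
	e->undo(e->data);
	stack_push(&s->redo_top, e);
	return true;
}

bool undo_stack_redo(struct undo_stack *s)
{
	struct undo_redo_t *e = stack_pop(&s->redo_top);
	if (!e)
		return false;
	e->redo(e->data);
	stack_push(&s->undo_top, e);
	return true;
}
```

Committing a new action while redos are pending frees those entries via the cleanup callback, matching the "diverged tree" behaviour described above.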
74 | 75 | ### Memory 76 | The current design of only saving the state of the local area is intended to 77 | reduce memory consumption. However, over the course of a long lifetime, lots of 78 | memory would eventually be used by having a long tree. After a certain length n 79 | (perhaps customizable?) the end of the chain should be saved to disk and removed 80 | on close. 81 | 82 | Having disk read/write operations for each action would be costly. In 83 | order to reduce that load it should only save/read in sets of n when 84 | it needs to. 85 | 86 | ### v1 87 | Note: Items checked off are implemented in the POC. 88 | The core scope should include: 89 | - [X] Scenes 90 | - [X] Add 91 | - [X] Delete 92 | - [X] Rename 93 | - [X] Duplication 94 | - Transitions 95 | - Add 96 | - Remove 97 | - Properties 98 | - [X] Sources 99 | - [X] Add 100 | - [X] Delete 101 | - [X] Rename 102 | - [X] Transforming 103 | - [X] Scene Collections 104 | - [X] Add 105 | - [X] Removal 106 | - [X] Switching 107 | - [X] Rename 108 | - Audio 109 | - Volume Settles (i.e. stops after time) 110 | - [X] Filters 111 | - [X] Add 112 | - [X] Remove 113 | - [X] Rename 114 | - [X] Properties 115 | 116 | ### v2 117 | - Order Changes 118 | - Grouping 119 | - Property Changes 120 | - Changes occur in the local undo stack, then get simplified to a single undo in main. 121 | - Other Small changes that could positively impact UX 122 | 123 | ### Specification 124 | - Properties 125 | - On Save/Cancel 126 | - Transforms 127 | - Mouse Release 128 | - On Close 129 | - Grouping 130 | - Insert 131 | - Removal 132 | 133 | # Requirements 134 | This section is a clear guideline on what successful completion is. This feature 135 | can be split up into two segments, necessities, and benefits. The necessities 136 | implement undo/redo for core features where users spend their time making adjustments, 137 | such as scenes and sources.
Benefits are undo/redo for items that are not strictly necessary, 138 | but would be nice on the user side, such as undoing drag and drop order. Successful completion must include all 139 | items outlined in the scope, **at a minimum**. 140 | 141 | On top of having the core features implemented, they must be thoroughly tested to 142 | ensure consistent validity and cohesion. It should be tested enough 143 | to ensure that all data retains its integrity and cannot become corrupted. It must 144 | also be memory safe and not cause any memory leaks. It should also be minimally invasive to the rest 145 | of OBS, so as not to cause any other undefined behavior or bugs in the application as a whole. 146 | -------------------------------------------------------------------------------- /text/0048-updater-branches.md: -------------------------------------------------------------------------------- 1 | # Summary 2 | 3 | Changes to add the option for Windows users to opt into branches containing unstable/beta releases. 4 | 5 | # Motivation 6 | 7 | Currently, Betas/RCs only reach a small number of users and require manual download and installation (outside of Steam). 8 | 9 | Additionally, we may occasionally provide branches to test specific fixes without releasing a full Beta/RC or manually sending builds to users. 10 | 11 | # Changes 12 | 13 | ## Server-side 14 | 15 | To make branches accessible, a new file would be added next to the existing `manifest.json`, called `branches.json`; its contents would describe the available branches.
16 | 17 | Example: 18 | ```json 19 | [ 20 | { 21 | "name": "beta", 22 | "display_name": "Beta / Release Candidates", 23 | "description": "Semi-stable test builds", 24 | "enabled": false, 25 | "visible": true, 26 | "macos": true, 27 | "windows": true 28 | }, 29 | { 30 | "name": "nightly", 31 | "display_name": "Nightly", 32 | "description": "Unstable/Unsigned nightly builds", 33 | "enabled": true, 34 | "visible": true, 35 | "macos": false, 36 | "windows": true 37 | }, 38 | { 39 | "name": "test_1234", 40 | "display_name": "Test, Issue #1234", 41 | "description": "Test Build to Fix issue #1234", 42 | "enabled": true, 43 | "visible": false, 44 | "macos": false, 45 | "windows": true 46 | } 47 | ] 48 | ``` 49 | 50 | The display name of a branch may be any valid UTF-8 character sequence. The `name` property shall only use lowercase ascii characters a-z, 0-9, and "_". 51 | 52 | The name of the manifest file for a branch would be constructed as `manifest_{name}.json` and expected to be in the same location as the default manifest. 53 | The default branch ("stable") must not be specified and is hardcoded to be always present. 54 | 55 | The purpose of the `enabled` flag here is to steer clients away from a branch, without deleting it from the list entirely. 56 | For example, a user who opted into receiving release candidate builds would receive builds from the default branch while the RC branch is disabled, but would then again receive release candidate builds once the branch has been re-enabled. In the UI this could be represented by a note in the dropdown such as " *(Disabled, using default branch)*". 57 | If a selected branch is deleted entirely, OBS should fall back to the default branch, notifying the user via a simple dialog box. 
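The naming rules above (lowercase a–z, 0–9, and "_" for `name`; `manifest_{name}.json` for the per-branch manifest; "stable" hardcoded as the default) could be checked with helpers along these lines — the function names are hypothetical, not from the actual patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Validate a branch `name` against the RFC rule:
 * only lowercase ASCII a-z, digits 0-9, and "_". */
static bool branch_name_valid(const char *name)
{
	if (!name || !*name)
		return false;
	for (const char *c = name; *c; c++) {
		bool ok = (*c >= 'a' && *c <= 'z') ||
			  (*c >= '0' && *c <= '9') || *c == '_';
		if (!ok)
			return false;
	}
	return true;
}

/* Build "manifest_{name}.json"; the default "stable" branch is never
 * listed in branches.json and keeps the plain "manifest.json" name. */
static void branch_manifest_file(const char *name, char *out, size_t len)
{
	if (!name || strcmp(name, "stable") == 0)
		snprintf(out, len, "manifest.json");
	else
		snprintf(out, len, "manifest_%s.json", name);
}
```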
58 | 59 | Additionally, we may consider using the `visible` flag to provide hidden branches that can only be selected via config or manual name entry, for example when providing builds with targeted fixes to a larger number of users without manually distributing test builds. 60 | 61 | For patches and binaries of several branches to co-exist on the server, the file structure may have to be changed slightly: 62 | 63 | Update files: 64 | - **Current:** `https://cdn-fastly.obsproject.com/update_studio//` 65 | - **Proposed:** `https://cdn-fastly.obsproject.com/update_studio///` 66 | 67 | Delta patch files: 68 | - **Current:** `https://cdn-fastly.obsproject.com/patches_studio///` 69 | - **Proposed:** `https://cdn-fastly.obsproject.com/patches_studio////` 70 | 71 | ## UI 72 | 73 | ### Visual 74 | 75 | Add a new section with a dropdown to switch the update "channel". The dropdown shows the name and description of the branch. 76 | After applying a change in branch, closing the settings window will trigger a check for updates on the selected branch (if enabled). 77 | 78 | ![Settings Example](./0048-updater-branches/settings.png) 79 | 80 | ### Code 81 | 82 | - (Windows) Before fetching the `manifest.json`, the UI would also need to request `branches.json` and then decide which manifest file to fetch 83 | - (macOS) Branches will need to be fetched and validated before starting Sparkle's update check 84 | - In case the selected branch is disabled, it should fall back to the default branch 85 | - The selected branch needs to be added to the command line arguments used for the updater 86 | - The available branches are tracked inside `OBSApp` so they can be accessed via `App()->GetUpdateBranches()` 87 | 88 | ## Updater (Windows) 89 | 90 | The selected branch would be passed as an argument to the updater and used when constructing the download URL. Additionally, the branch will be included as a URL parameter when requesting a list of delta patches from the server.
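The proposed layout inserts the branch as one extra path segment under the existing CDN base. A minimal sketch of the URL construction — the trailing path (`rest`) is a hypothetical stand-in, since the exact file segments are elided in the URLs above:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define UPDATE_BASE "https://cdn-fastly.obsproject.com/update_studio"

/* Insert the selected branch between the base URL and the remaining
 * (hypothetical) path: ".../update_studio/<branch>/<rest>". */
static void build_update_url(const char *branch, const char *rest,
			     char *out, size_t len)
{
	snprintf(out, len, UPDATE_BASE "/%s/%s", branch, rest);
}
```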
91 | 92 | ## Sparkle (macOS) 93 | 94 | In Sparkle 2 we can add a channel to each appcast item via `beta` and then set the currently opted-in channel via `allowedChannelsForUpdater`. 95 | 96 | To prevent issues with older Sparkle versions, we may have to use a separate RSS feed from the current ones. In doing so we could merge feeds for Apple Silicon and Intel builds and use channels to differentiate them, e.g. use `_(arm64|amd64)` as the channel name. 97 | 98 | # Drawbacks 99 | 100 | - Due to how the current updating process works, this process still requires manually signed and uploaded builds. So nightlies straight from CI are not an option without further changes. 101 | - Users may opt into updates without fully understanding the impact, and run into issues when presented with a Beta/RC build. 102 | - (Windows) Rolling back to an older release may leave the OBS directory in an "unclean" state with future plugins/libraries not removed 103 | + This could be partially mitigated by https://github.com/obsproject/obs-studio/pull/6916 104 | 105 | # Additional Information 106 | 107 | - Sparkle documentation: https://sparkle-project.org/documentation/api-reference/Protocols/SPUUpdaterDelegate.html 108 | - Pull Request (Windows): https://github.com/obsproject/obs-studio/pull/6907 109 | - Pull Request (macOS): https://github.com/obsproject/obs-studio/pull/7723 110 | -------------------------------------------------------------------------------- /text/0043-webrtc-output.md: -------------------------------------------------------------------------------- 1 | # OBS WebRTC Output 2 | 3 | 4 | # Summary 5 | 6 | OBS should expose APIs to service plugins that allow configuring WebRTC as an output transport. 7 | 8 | # Motivation 9 | 10 | WebRTC is a popular method for transmitting live video. Services such as Millicast, Wowza, Janus, and Caffeine have added WebRTC ingest support. In order to support these services, there are at least two forks of OBS.
This provides a worse experience for the end user and wastes development work maintaining these forks. 11 | 12 | Exposing the API prevents service plugins from depending on behavior in the underlying WebRTC library implementations. This allows plugins to be updated less often and prevents them from breaking between OBS releases. It also reduces final plugin size. 13 | 14 | WebRTC does not specify how clients should exchange SDP messages, leaving it up to each service. Each service plugin will be responsible for implementing a version of this for how their service works. To aid this, I suggest shipping additional APIs that let them use OBS-managed WebSockets and HTTP calls. 15 | 16 | The [WebRTC fork](https://github.com/CoSMoSoftware/OBS-studio-webrtc) of OBS has code quality issues. The output plugins it provides are directly coupled to libwebrtc. They also are not isolated from each other, and code for multiple services is shared in the same file. This makes shipping them separately or from a plugin manager impossible. 17 | 18 | # Implementation Details 19 | 20 | ## WebRTC library 21 | 22 | OBS already has ways to capture and encode audio and video, making a library like libwebrtc very bloated for our purposes. I propose we use the following library: 23 | 24 | **[amazon-kinesis-video-streams-webrtc-sdk-c](https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c)** 25 | 26 | * Written in C, so it can integrate nicely with libobs. 27 | * Supports mbedTLS, which OBS already uses, so we have one less dependency to ship. 28 | * Very small, sub-200k library size. I built a test app with the library and its dependencies statically linked and it was around 4 MB. 29 | 30 | ## Signaling 31 | 32 | Services should have multiple choices for implementing the signaling layer. 33 | 34 | ### Websockets 35 | The most common signaling protocol. We should provide wrapper functions around libwebsockets to help plugins safely send and receive messages.
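The RFC does not specify the wrapper API; as a sketch, it might expose a send call plus a message callback so plugins never link libwebsockets themselves. All names here are hypothetical, and the loopback stub stands in for the real socket:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical shape for an OBS-managed signaling socket: plugins send
 * text frames and receive complete messages back via a callback. */
typedef void (*obs_ws_message_cb)(void *param, const char *msg);

typedef struct obs_ws {
	obs_ws_message_cb on_message;
	void *param;
} obs_ws_t;

/* Loopback stub: a real implementation would queue the frame on the
 * underlying libwebsockets connection; here we echo it straight back
 * to the callback purely for illustration. */
static void obs_ws_send(obs_ws_t *ws, const char *msg)
{
	if (ws->on_message)
		ws->on_message(ws->param, msg);
}

/* Capture helper for the demo below. */
static char last_msg[128];
static void record_msg(void *param, const char *msg)
{
	(void)param;
	strncpy(last_msg, msg, sizeof(last_msg) - 1);
}
```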
36 | 37 | ### HTTPS 38 | OBS has a curl dependency; we can either expose this directly or provide wrapper functions. 39 | 40 | ### WHIP 41 | WHIP greatly simplifies the signaling layer; it only requires a POST request with an SDP and an Authorization header with a bearer token. 42 | 43 | 44 | 45 | 46 | Usage examples: 47 | * Use WHIP with just the services file and no code. 48 | * Provide an easy way for services to do OAuth that we can use with WHIP. 49 | * Let services call WHIP with their own auth flow / code 50 | 51 | ### Scripting? 52 | We should investigate the possibility of allowing services to implement their signaling layer with the current OBS scripting system. Allowing services to ship a single Python or Lua file might be preferable to only allowing C/C++ shared library based plugins. 53 | 54 | ## New Public APIs 55 | 56 | ``` 57 | /** 58 | Pass the current encoder settings to the RTP transceiver settings in the WebRTC library. This will make sure the local SDP has the correct codecs. 59 | */ 60 | void obs_webrtc_configure_transceiver(obs_service_t *service, const obs_encoder_t *encoder); 61 | 62 | /** 63 | Simple enum to say if an SDP is an offer or an answer 64 | */ 65 | enum sdp_type {offer, answer}; 66 | 67 | /** 68 | Set the remote SDP for a WebRTC connection. This is a value that the remote server will give you during signaling. 69 | */ 70 | void obs_webrtc_set_remote_description(obs_service_t *service, enum sdp_type type, const char *sdp); 71 | 72 | /** 73 | If the remote server has sent us an offer as the remote description, this can be called to generate the SDP to respond with.
74 | */ 75 | char *obs_webrtc_create_answer(obs_service_t *service); 76 | 77 | /** 78 | Set the SDP we use locally, generally the value returned from `obs_webrtc_create_answer`. 79 | */ 80 | void obs_webrtc_set_local_description(obs_service_t *service, enum sdp_type type, const char *sdp); 81 | 82 | /** 83 | With WHIP we will need to generate an offer first. 84 | */ 85 | char *obs_webrtc_create_offer(obs_service_t *service); 86 | ``` 87 | 88 | # Concerns / Questions 89 | * Should we respect picture loss indication (PLI) packets, or should OBS send keyframes as it already does? 90 | * Are there any issues with being SDP-based? e.g. mediasoup requires an adapter to use SDP. 91 | 92 | # Q&A 93 | 94 | ## What does this mean for me as a plugin author? 95 | Your plugin will not link against the chosen WebRTC library directly. OBS will provide functions to get an SDP to send to a remote service and also a call to consume an SDP from the remote service. We will also ship APIs to allow your plugin to communicate over a WebSocket or make HTTP calls without having to link against curl or a WebSocket library. 96 | 97 | # Additional Information / Notes 98 | 99 | ## Open source WebRTC libraries 100 | * **libwebrtc** - The library that everyone uses; it moves fast and has a large codebase, but does everything. 101 | * **[webrtc-rs](https://github.com/webrtc-rs/webrtc)** - A Rust implementation of WebRTC based on Pion.
102 | * **[pion](https://github.com/pion/webrtc)** - A very modular Golang WebRTC library. 103 | * **[amazon-kinesis-video-streams-webrtc-sdk-c](https://github.com/awslabs/amazon-kinesis-video-streams-webrtc-sdk-c)** - A C-based WebRTC library. 104 | 105 | ## Open source WebSocket libraries 106 | 107 | * **[libwebsockets](https://libwebsockets.org)** - A pure C WebSocket library. 108 | * **[WebSocket++](https://www.zaphoyd.com/projects/websocketpp/)** - A header-only C++ library that implements RFC6455 (The WebSocket Protocol). 109 | * **[Boost Beast](https://www.boost.org/doc/libs/1_76_0/libs/beast/doc/html/beast/using_websocket.html)** - Beast is a C++ header-only library serving as a foundation for writing interoperable networking libraries by providing low-level HTTP/1, WebSocket, and networking protocol vocabulary types and algorithms using the consistent asynchronous model of Boost.Asio. 110 | 111 | 112 | -------------------------------------------------------------------------------- /accepted/0015-virtual-camera-support.md: -------------------------------------------------------------------------------- 1 | - Start Date: 2020-03-25 2 | - RFC PR: #15 3 | - Related GitHub Issue: https://github.com/obsproject/obs-studio/issues/2568 4 | 5 | # Summary 6 | 7 | Add the ability to render output to a virtual camera device on Linux, macOS, and Windows. 8 | 9 | # Motivation 10 | 11 | OBS is a powerful set of tools to manipulate live video streams that natively supports output to popular streaming services as well as rendering to a local video file. There are a huge number of people who engage in 1:1 or small-scale streaming using video conferencing software like Zoom or Google Hangouts. Many of these people have similar needs to those of traditional streamers, but for one reason or another cannot switch the video conferencing software they use (social graph, corporate policy).
12 | 13 | OBS can meet this need by creating a virtual camera device and outputting to that device such that programs capable of consuming camera input can now consume OBS input with no extra development. 14 | 15 | Similar functionality: 16 | * OBS can be extended by the [OBS-VirtualCam plugin](https://obsproject.com/forum/resources/obs-virtualcam.539/) (Windows) and [obs-v4l2sink plugin](https://github.com/CatxFish/obs-v4l2sink) (Linux). Currently, there is no OBS solution for macOS. 17 | * [Wirecast includes virtual camera support](http://www.telestream.net/pdfs/user-guides/Wirecast-8-User-Guide-Windows.pdf) on both Windows and Mac. 18 | * [Webcamoid](https://webcamoid.github.io/) is an existing open-source cross-platform virtual camera application, which may have some useful tidbits when investigating implementation details for each platform. 19 | 20 | # Detailed design 21 | 22 | Feature Set: 23 | 24 | * A button is added to the Output Controls dock labeled "Start Virtual Camera", which will toggle the start/stop status of the virtual camera output. While virtual camera output is active, the label should read "Stop Virtual Camera". 25 | * The virtual camera appears in the system listed as "OBS Virtual Camera". 26 | * The virtual camera outputs at the same resolution and framerate as the OBS rendered output. 27 | * A section is added to OBS settings providing the following settings: 28 | * A checkbox to indicate if the user wishes to start the virtual camera output automatically when OBS starts 29 | * A checkbox to flip the virtual output horizontally 30 | * In the Output settings tab, the warning that appears when outputs are active is augmented to list all active outputs so that users know which outputs they need to disable in order to edit output settings. 31 | * When the virtual camera output is not active, the virtual camera device shows a static image indicating to the consumer that output is not started.
This image meets four requirements: 32 | * It allows the consumer to smoothly switch from non-active to active content without having to worry about what order the applications are started in. 33 | * It informs the user that the output is not started yet, and needs to be started before output will be sent from OBS. 34 | * It is aesthetically pleasing such that, if the output is accidentally sent to viewers (for example, if the user fails to start the virtual output before joining a Zoom call), it is not unnecessarily gaudy or technical. 35 | * It requires minimal localization, or provides a way to localize any displayed text on the image 36 | * The auto-config tool is updated to include "outputting to a virtual camera" as a primary use case, with reasonable recommendations. Since this doesn't require testing encoding or bandwidth settings, the resolution should be set to 1920x1080 and frame rate set to 30 FPS. 37 | * A new command line flag `--startvirtualcam` is added, which starts the virtual cam automatically when OBS is started. 38 | 39 | ## Platform specific implementations 40 | 41 | * On **Windows**, the implementation of this plugin should leverage [libdshowcapture](https://github.com/obsproject/libdshowcapture). OBS already depends on libdshowcapture, so using this existing library limits the need to add additional dependencies. 42 | * On **macOS**, OBS will need a plugin to output over IPC to a CoreMediaIO DAL plugin that is registered on the system upon OBS install. This will require repackaging the installer as a `.pkg` instead of a compressed `.app`. 43 | * On **Linux**, the most straightforward way to implement this functionality would be to output via [v4l2sink](https://gstreamer.freedesktop.org/documentation/video4linux2/v4l2sink.html?gi-language=c). Note that this still requires the user to have [v4l2loopback](https://github.com/umlaeute/v4l2loopback) installed to consume this output.
For package installations, a dependency should be placed on v4l2loopback such that it is installed by the package manager when OBS is installed. 44 | 45 | ## Technical considerations 46 | 47 | - This RFC is explicitly for an OBS-specific implementation of output from OBS to a virtual camera. A generic middleware solution to this problem is out of scope for this RFC. 48 | - The virtual camera will need to be registered with the system upon OBS installation. 49 | - The virtual camera should be accessible by third party applications that support webcams/capture cards. 50 | - OBS supports running multiple instances of itself. If two instances of OBS run simultaneously, the first instance to output to the device should "win" the race to communicate to the virtual camera device. The second instance should give an error saying that the output device is already in use by another instance of OBS, if possible. 51 | 52 | ## User UX 53 | 54 | * When installing, OBS should install the platform-specific driver to enable virtual camera output on the system. 55 | * Initial configuration should require very little input from the user, if any. Resolution and framerate of the camera will be pre-defined by the Video settings of OBS, and no effects outside those already provided by OBS will be specially exposed for this output. 56 | * Add a button in the OBS UI below "Start Recording" and above "Studio Mode". In English localization, the button would be labeled "Start Virtual Camera". 57 | * When you click this button, the current output is directed to the virtual camera device. The button text then toggles to the localized "Stop Virtual Camera" label. 58 | * When the user clicks into the settings of their video conferencing app, they will see an entry in the list of cameras labeled "OBS Virtual Camera" (no localization). When they choose this, the OBS output will be fed from this "device" to the app.
59 | * When the user clicks "Stop Virtual Camera", the virtual device no longer receives frames from OBS, and instead receives a static image defined by OBS indicating that the output is not active. 60 | 61 | # Drawbacks 62 | 63 | * The most obvious omission from this RFC is the lack of virtual audio output in addition to virtual camera output. However, though the features are logically closely related, they are unfortunately vastly different technically speaking. As such, we will leave the implementation of virtual audio output to [a separate RFC](https://github.com/obsproject/rfcs/pull/16). 64 | * Platform-specific driver code would require maintenance, as operating systems (especially in more recent times) tend to limit (or require user permission for) access to AV hardware. This will mean more testing/debugging will be required each time Windows, macOS, and officially supported Linux flavours (currently Ubuntu) provide a major update. 65 | 66 | # Alternatives 67 | 68 | * Pursue a more generic output interface and (encourage someone else to?) build a generic virtual camera to consume it. E.g. output via RTSP or NDI and have an "NDI Cam" that has nothing to do with OBS. 69 | * Adapt Webcamoid's source code for our use. This may end up being more involved than we'd like, however, and might end up making the feature more complicated than we want. 70 | -------------------------------------------------------------------------------- /accepted/0019-app-notifications.md: -------------------------------------------------------------------------------- 1 | - Start Date: 2020-04-17 2 | - RFC PR: #19 3 | - Mantis Issue: N/A 4 | 5 | # Summary 6 | 7 | A method for plugins and scripts to notify the user about important or time-critical information. I'll use the term "Alerts" for the rest of the RFC to cover this functionality. This RFC is mostly around the backend design, with elements of the frontend and UX considerations.
8 | 9 | The end goal is an alert widget of some description, with a panel/window for in-depth and historical alerts. 10 | 11 | # Motivation 12 | 13 | Internally, OBS does a lot of things at once, and any one of them can fail for a variety of reasons. Currently, the only way for a plugin to document that something went wrong is to log it. Then, someone with more experience requests and reads the log, to then provide the user with an actual solution. This is a whole lot of unnecessary work for both the user (who is more likely to give up) and our support team. 14 | 15 | ## Example Alerts 16 | 17 | > **Some sources failed to load.** It looks like a plugin may be missing. Read more. 18 | 19 | > **Some sources are pointing to missing files.** They may have been moved or deleted. List problematic sources. 20 | 21 | > **Output aspect ratios don't match.** This can result in stretched recordings. Fix it. 22 | 23 | > **Mismatched sample rate found.** This can result in audio desync, crackling, etc. Fix it. 24 | 25 | > **You just finished a 4 hour stream.** We detected some issues that can be fixed. Check via the analyzer. 26 | 27 | > **You were disconnected from your streaming service.** You can read our troubleshooting guide for common solutions. 28 | 29 | > **OBS is having trouble maintaining 60 fps.** You can solve this by running OBS as Administrator. Not sure how? 30 | 31 | > **An OBS update is available!** You can update manually at any time. Read the full list of changes here. 32 | 33 | > **Display Capture requires extra configuration.** You can follow our Laptop Guide here. 34 | 35 | > **Issues going live?** You can keep an eye on @TwitchSupport for realtime updates. 36 | 37 | # Detailed design 38 | 39 | Alerts can have multiple levels based on urgency: Critical, Warning, and Info. In the main window, Alerts are shown one at a time rather than stacking.
They can either be dismissed automatically (for example, when the issue goes away or the user fixes it manually) or dismissed manually by the user. 40 | 41 | ## UX Notes 42 | 43 | Visually, the Alert will have a coloured background depending on the priority, and within the container an icon to represent the priority, the summary, an 'expand' button to show the full description, and a dismiss button. If expanded, we can provide a direct link to a full list of Alerts. This could either be shown above the preview, or even as part of the status bar. 44 | 45 | A full, scrollable list of active Alerts can be viewed in a secondary window, including Alerts that the user dismissed but are still active (as in, the problem hasn't been solved). From here, the user can explicitly decide to enable/disable alerts from specific plugins. For example, "never show invalid audio device errors". This page could also have methods of filtering/sorting by priority/date, etc. 46 | 47 | The summary should be capable of HTML-anchor style actions, allowing it to open external webpages or even open specific windows within OBS - say, the Properties window for a specific source, or the Settings window pointed to a specific tab with a certain setting highlighted. 48 | 49 | Note: This Alert system would not expose a way to pop up warning/error dialogs. That is not its purpose. 50 | 51 | ## Internals 52 | 53 | The Alerts framework would likely have to be built as part of libobs, providing a pathway from plugins to the UI. 54 | 55 | When generating an Alert, the following (bare minimum) parameters should be available: 56 | 57 | ```js 58 | { 59 | "timestamp": 1585817608, // An epoch timestamp of when the issue occurred. Generated automatically 60 | "priority": "CRITICAL", // Enum of CRITICAL, WARNING, INFO // Required? Could default to warning? 61 | "summary": "", // Required. HTML-anchor capable. A short summary of the issue, visible from the main window 62 | "description": "", // Optional.
A lengthy, more informative HTML string providing more context and inline solutions. Might support newlines and dot points 63 | "persist": true, // Optional, default to false. If the user dismisses the alert, whether it should persist in the full list. Useful for alerts that are tied to a fixable issue. 64 | } 65 | ``` 66 | 67 | Creating this Alert should return a unique identifier, so that the plugin can keep track of and control the status of an Alert. Alongside this, we should also provide an ability to tie Alerts to a specific source (scene? group?), property, and its valid state. That way the plugin relinquishes control of the Alert and it's dismissed as part of the property becoming valid, or the source being deleted/unloaded (when switching Scene collections or closing OBS). 68 | 69 | Could be designed similarly to hotkeys in terms of API. Examples: `obs_add_source_notification` / `obs_add_frontend_notification` / `obs_add_output_notification`. 70 | 71 | 72 | 73 | # How We Teach This 74 | 75 | On first launch, it'd be nice to show an initial bit of information via this system to introduce the user to OBS, and to explain the purpose of the Alert UI. 76 | 77 | For developers, making sure this is well documented is important. Knowing that they can reach out to the user within the UI when things go wrong is valuable. 78 | 79 | # Drawbacks 80 | 81 | Complexity. This will require a lot of code, logic, and a background runner keeping track of active/dismissed alerts. The solution to this may be the option to turn it off somehow for bigger productions that only need issues confirmed before going live. 82 | 83 | Visual noise. This one is less of a concern, but at the same time we don't want too much going on in the UI. A big toolbar along the top could be distracting, but at the same time going subtle means that the user could potentially not realise that something has gone wrong.
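The bare-minimum parameters listed under Internals could map to a C-side creation call roughly like this. All names and the struct layout are hypothetical sketches — the RFC deliberately leaves the exact API open:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

enum obs_alert_priority { OBS_ALERT_INFO, OBS_ALERT_WARNING, OBS_ALERT_CRITICAL };

/* Mirrors the minimum fields from the Internals section; `timestamp`
 * would be filled in automatically at creation time. */
struct obs_alert_info {
	enum obs_alert_priority priority;
	const char *summary;     /* required, HTML-anchor capable */
	const char *description; /* optional */
	bool persist;            /* keep in the full list after dismissal */
};

static uint64_t next_alert_id = 1;

/* Stub: a real implementation would hand the alert to the frontend.
 * Returns a unique id so the plugin can track/dismiss it later,
 * or 0 if the required summary is missing. */
static uint64_t obs_alert_create(const struct obs_alert_info *info)
{
	if (!info || !info->summary)
		return 0;
	(void)time(NULL); /* the epoch timestamp would be recorded here */
	return next_alert_id++;
}
```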
84 | 85 | # Additional Information 86 | 87 | Alerts shouldn't have a timeout, so that they're not missed by the user. 88 | 89 | There are many things to consider in this design. It's highly likely that the backend & UI would be developed & PR'd independently rather than together. This'd ensure a solid backend design, and an easier time reviewing it. 90 | 91 | The backend design & scope should be fleshed out significantly more before a real implementation is started. 92 | 93 | # Alternatives 94 | 95 | - Exposing QSystemTrayIcon's `showMessage` - providing plugins a way to use native notifications. 96 | - Downsides include: 97 | - They will trigger a sound that the user cannot mute 98 | - Windows automatically disables these when a game is running (via Focus assist) 99 | - They are distracting and interrupt the user 100 | - Due to how they're visually exposed to the user, they could be abused by plugins ("annoying") 101 | - Upsides include: 102 | - Easier to implement 103 | - Will get the user's attention 104 | 105 | # Unresolved questions 106 | 107 | * Should Alerts have a list of methods to dismiss (scene collection switch, profile switch)? 108 | * Methods to dismiss would be quite limited in libobs, and would add unneeded complexity to the frontend. Methods to dismiss would instead be tied to sources, outputs, etc. 109 | * The current design assumes that alerts are only shown in the main window. Often it makes more sense to show warnings/info within Properties themselves. I personally think the scope should be limited to the core alert system, and overhauling Properties to support many more things is a task for another day. Thoughts? 110 | * Rather than making the summary/description link/anchor capable, should we instead provide a "link" field, and auto-generate a "Learn more" link at the end of the summary?
The big downside of this is that only one link could be provided, rather than providing extensibility and further information for the user. 111 | * Should this Alert API contact an OBS server to receive alerts? Use case here would be if a service is having connectivity issues that are out of our control. This could explicitly be optional, maybe only enabled when the user links their account with a service. -------------------------------------------------------------------------------- /text/0049-multiple-video-mixes.md: -------------------------------------------------------------------------------- 1 | # Summary 2 | 3 | Add the ability to render multiple "video mixes" where a "video mix" is ultimately a `render_texture` created by compositing scenes and sources that can be fed into one or more video encoders/outputs. 4 | 5 | 6 | # Motivation 7 | 8 | OBS supports multiple video outputs allowing simultaneous streaming, recording, virtual camera, etc. These outputs, however, must all be derived from a single `render_texture`. This makes it impossible to support [Multiple Video Outputs (Selective Recording, ISO recording)](https://ideas.obsproject.com/posts/41/multiple-video-outputs-selective-recording-iso-recording) or to share just a webcam via virtual camera (to participate in a video conference while streaming) with the current architecture. 9 | 10 | While plugins do exist to implement these features, they do so in a roundabout way. They provide a no-op filter which intercepts video from a source for the plugin to process. This completely bypasses the normal output pipeline of OBS. 11 | 12 | 13 | # Design 14 | 15 | ## Split `obs_core_video` 16 | 17 | Video rendering data is stored in a single instance of the `obs_core_video` struct internally.
This struct contains shared resources like the `graphics_t` pointer and `gs_effect_t`s and state management for the main rendering thread, but it also contains data specific to the render such as `render_texture` and all its related copy/staging surfaces, gpu encoder thread management, and the `video_t` pointer. The first step will be to factor out those `render_texture`-related fields into a new `obs_core_video_mix` struct. 18 | 19 | The initial implementation will bias toward leaving fields in `obs_core_video` whenever possible to limit the scope of the refactor. Settings like fps and resolution will be the same for all video mixes. Future work can be done to make these configurable per video mix, but that functionality is out of scope for the initial refactor. 20 | 21 | ## Multiple Renders per View 22 | 23 | The `obs_view` struct already supports up to 64 `source_t` channels of input to the render thread. All rendering of video, however, is done to the same texture (one on top of the other). Currently only 7 of those channels are used and only 1 for video (channel 0 is the primary video source and 1-6 are audio device sources). 24 | 25 | Adding more video sources to the existing `main_view` is an obvious way to support multiple video mixes. In order to get usable output textures, the render pipeline will be updated to use a different `obs_core_video_mix` struct and therefore a separate `render_texture` and related outputs for each channel with a video source set. There's no real need to continue to support multiple sources rendering onto the same texture within a view since the same thing can be (and is) implemented with scene or transition sources today. 26 | 27 | The `obs_core_video` struct will be updated to contain `MAX_CHANNELS` copies of the new `obs_core_video_mix`, one for each potential channel in the `main_view`. The main render thread will loop through each channel and render any source with video to its own dedicated `render_texture`.
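The split and the per-channel loop described above might look roughly like the following sketch. All type layouts and the `render_all_mixes` helper are hypothetical stand-ins for illustration; the real libobs structs contain many more fields.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define MAX_CHANNELS 64

/* Stand-ins for libobs types; the real definitions differ. */
typedef struct gs_texture { int id; } gs_texture_t;
typedef struct obs_source { int has_video; } obs_source_t;

/* Per-mix render state factored out of obs_core_video (hypothetical layout). */
struct obs_core_video_mix {
	gs_texture_t *render_texture; /* dedicated output texture, lazily created */
};

/* Shared state keeps one mix slot per view channel. */
struct obs_core_video {
	obs_source_t *channels[MAX_CHANNELS];
	struct obs_core_video_mix mixes[MAX_CHANNELS];
};

/* One pass of the main render loop: allocate per-channel resources on first
 * use, then composite each video source into its own texture. */
static int render_all_mixes(struct obs_core_video *video)
{
	int rendered = 0;
	for (size_t i = 0; i < MAX_CHANNELS; i++) {
		obs_source_t *src = video->channels[i];
		if (!src || !src->has_video)
			continue; /* unused channels consume no graphics resources */

		struct obs_core_video_mix *mix = &video->mixes[i];
		if (!mix->render_texture) /* deferred allocation on first render */
			mix->render_texture = calloc(1, sizeof(gs_texture_t));

		/* ...composite src into mix->render_texture here... */
		rendered++;
	}
	return rendered;
}
```

The sketch also illustrates the deferred-allocation behavior covered under Resource Management: channels without a video source never touch graphics resources.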
Each channel may also have separate encoders/outputs and a dedicated gpu thread as needed. 28 | 29 | ## Resource Management 30 | 31 | Additional video mixes each require their own dedicated graphics resources. This proposed design supports up to 64 mixes, but pre-allocating graphics resources when most channels will go unused would be wasteful. Instead, initialization for `render_texture` and related resources will occur during the first render loop where a channel has a `source_t` with video assigned. 32 | 33 | It may be possible to free resources early as well if the `source_t` is removed, but this introduces risks since those resources are shared with the gpu encoder and output threads and coordinating teardown might be challenging. It's not clear that early cleanup would provide much real-world benefit, so this optimization will likely be omitted from the initial implementation. 34 | 35 | 36 | # Proposed UX 37 | 38 | While this proposal focuses primarily on the `libobs` refactor required to enable multiple video mix functionality, here are some ideas on how the UI could be updated to take advantage: 39 | 40 | ## Simple Output Mode 41 | 42 | ![Simple](0049-multiple-video-mixes/output-simple.png) 43 | 44 | 45 | An additional "Virtual Camera" `GroupBox` could be added to the Output settings when in Simple mode to select the camera source. It would default to "Same as stream" but could be overridden to select any other source. If an override was configured, this source could be set to a static channel >6 which the virtual camera could use when starting. 46 | 47 | Configuring ISO recording is likely beyond the scope of what makes sense to include in the Simple configuration mode. 48 | 49 | ## Advanced Output Mode 50 | 51 | ![Advanced](0049-multiple-video-mixes/output-advanced.png) 52 | 53 | A new "Video" tab could be added to the Output settings when in Advanced mode.
This would function similarly to the existing "Audio" tab where users could configure scenes/sources for each of N video tracks. Additionally, the virtual camera source selection would be shown here while in Advanced mode. This part of the configuration would simply map sources to channels very much like how the Advanced Audio Properties dialog enables mapping audio sources to different tracks. The only difference is that video tracks don't mix multiple sources directly; a scene must be used to set up this mixing separately if desired. 54 | 55 | The Recording tab could also be updated to configure ISO recording by selecting which video tracks to record (similar to selecting audio tracks to record except that each video would be written to a separate file). A basic version of this might simply allow selecting which video tracks to record and then duplicating the configured encoder under the hood to record any selected track with the same settings. A more advanced version might allow full configuration for each track with different encoder settings, separate audio track selections per video, etc. Either of these or other options can easily be implemented on top of the core `libobs` changes proposed above. 56 | 57 | 58 | # Alternatives 59 | 60 | ## Use Multiple `obs_view`s 61 | 62 | Instead of re-purposing the channels within the `main_view`, it would be possible to add similar multi-rendering support using separate `obs_view` objects for each. This ultimately would be a bigger refactor to the rendering thread and how sources are added/removed for rendering without any clear benefit vs the proposed approach. 63 | 64 | ## Use Source Filters 65 | 66 | There are several existing plugins which use source filters to intercept texture data and implement similar functionality. This bypasses most of the core render pipeline and would make it much more difficult to manage gpu encoder count limitations, consistent frame dropping, configuration, etc.
67 | 68 | 69 | # Drawbacks 70 | 71 | Deferring resource allocation could hide graphics failures that would have otherwise resulted in a fallback from DirectX to OpenGL. Not all resource creation is deferred, so this would only happen if some resources could be created but not all of them. If this ends up being an actual issue, then the allocation for channel 0 resources (the default video channel) can be hoisted back into the main init call and treated specially relatively easily. 72 | 73 | Re-purposing the `obs_view` channels to render to separate textures removes the ability to perform the sort of layered rendering to a single texture that exists today. It does not seem like this functionality is actually being used, nor is it clear what use case might require it that couldn't be similarly accomplished with a scene instead. 74 | 75 | Using a fixed-size channel list limits the available video mixes. Currently 64 channels are supported (with 7 already in use), but realistically a PC is unlikely to be able to handle 58 separate video mixes with individual encoders and outputs all at once anyway. If there's a need to support more, then future work could either make the channel count dynamic or simply increase the `MAX_CHANNELS` constant at the cost of a minimal amount of additional memory consumption. 76 | 77 | # Additional Information 78 | 79 | Plugins offering similar functionality: 80 | * [Source Record](https://obsproject.com/forum/resources/source-record.1285/) 81 | * [OBS Virtualcam](https://obsproject.com/forum/resources/obs-virtualcam.949/) and [Virtual Cam Filter](https://obsproject.com/forum/resources/virtual-cam-filter.1142/) 82 | 83 | Implementation: 84 | * [PR](https://github.com/obsproject/obs-studio/pull/6577) showing proposed implementation, tested locally by forcing a non-default source for the virtual camera.
85 | -------------------------------------------------------------------------------- /text/0045-add-notion-of-protocol.md: -------------------------------------------------------------------------------- 1 | # Summary 2 | - Define supported protocols for services and outputs in libobs and OBS Studio 3 | - Outputs for streaming are registered with one or more compatible protocols and codecs 4 | - Only codecs compatible with the output and the service are shown 5 | - The audio codec can be chosen if several are compatible: the codec in simple mode and the encoder in advanced output mode 6 | - Will be extended with RFC 39 7 | 8 | # Motivation 9 | Create better management of outputs, encoders, and services with an approach focused on protocols. 10 | 11 | # Design 12 | 13 | ## Outputs API 14 | Only outputs with the `OBS_OUTPUT_SERVICE` flag are affected. 15 | 16 | Adding to `obs_output_info`: 17 | - `const char *protocols`: protocols separated with a semicolon (ex: `"RTMP;RTMPS"`), required to register the output. The string used to identify the protocol must be its officially/widely used acronym; these acronyms are usually uppercase. 18 | 19 | Adding to this API these functions: 20 | - `const char *obs_output_get_protocols(const obs_output_t *output)`: returns the protocols supported by the output. 21 | - `bool obs_is_output_protocol_registered(const char *protocol)`: returns true if an output with the protocol is registered. 22 | - `void obs_enum_output_types_with_protocol(const char *protocol, void *data, bool (*enum_cb)(void *data, const char *id))`: enumerates, through a callback, all output types compatible with the given protocol. 23 | - `bool obs_enum_output_protocols(size_t idx, const char **protocol)`: enumerates all registered protocols. 24 | - `const char *obs_get_output_supported_video_codecs(const char *id)`: returns the compatible video codecs for the given output id.
25 | - `const char *obs_get_output_supported_audio_codecs(const char *id)`: returns the compatible audio codecs for the given output id. 26 | 27 | ## Services API 28 | Since a streaming service may not accept all the codecs usable by a protocol, adding a way to set supported codecs is required. 29 | 30 | Adding to `obs_service_info`: 31 | - `const char *(*get_protocol)(void *data)`: returns the protocol used by the service. RFC 39 will allow multi-protocol services in the future. 32 | - `const char **(*get_supported_video_codecs)(void *data)`: video codecs supported by the service. Optional; falls back to the protocol's supported codecs if not set. 33 | - `const char **(*get_supported_audio_codecs)(void *data)`: audio codecs supported by the service. Optional; falls back to the protocol's supported codecs if not set. 34 | 35 | Adding to this API these functions: 36 | - `const char *obs_service_get_protocol(const obs_service_t *service)`: returns the protocol used by the service. 37 | - `const char **obs_service_get_supported_video_codecs(const obs_service_t *service)`: returns the video codecs compatible with the service. 38 | - `const char **obs_service_get_supported_audio_codecs(const obs_service_t *service)`: returns the audio codecs compatible with the service. 39 | 40 | ### Services and connection information 41 | 42 | Depending on the protocol, the service object should provide various types of connection details (e.g., server URL, stream key, username, password). 43 | 44 | Currently, `get_key` can provide a stream id (SRT) or an encryption passphrase (RIST) rather than a stream key. 45 | 46 | Rather than adding a getter for each possible type of information, a single getter where the caller chooses which type of information it wants should be implemented. 47 | 48 | Types of connection details will be defined by an enum with only even-numbered values. Odd-numbered values will be reserved for potential third-party protocols (RFC 39).
49 | 50 | List of types: 51 | ```c 52 | enum obs_service_connect_info { 53 | OBS_SERVICE_CONNECT_INFO_SERVER_URL = 0, 54 | OBS_SERVICE_CONNECT_INFO_STREAM_ID = 2, 55 | OBS_SERVICE_CONNECT_INFO_STREAM_KEY = 2, 56 | OBS_SERVICE_CONNECT_INFO_USERNAME = 4, 57 | OBS_SERVICE_CONNECT_INFO_PASSWORD = 6, 58 | OBS_SERVICE_CONNECT_INFO_ENCRYPT_PASSPHRASE = 8, 59 | }; 60 | ``` 61 | 62 | Note: `OBS_SERVICE_CONNECT_INFO_STREAM_ID` is an alias of `OBS_SERVICE_CONNECT_INFO_STREAM_KEY` since they are technically the same. The alias was added to avoid the confusion that forcing a single name would cause. 63 | 64 | Adding to `obs_service_info`: 65 | - `const char *(*get_connect_info)(void *data, uint32_t type)` where `type` is one of the values in the list above indicating which information is wanted; returns `NULL` if the service doesn't have it. 66 | - `bool (*can_try_to_connect)(void *data)`, since protocols and services do not always require the same information. This function allows the service to report whether it can connect. Returns true if the service has all it needs to connect. 67 | 68 | Adding these functions to the Services API: 69 | - `const char *obs_service_get_connect_info(const obs_service_t *service, uint32_t type)`: returns the connection information related to the given type (list above). 70 | - `bool obs_service_can_try_to_connect(const obs_service_t *service)`: returns whether the service can try to connect. 71 | - returns the result of `bool (*can_try_to_connect)(void *data)`. 72 | - returns true if `bool (*can_try_to_connect)(void *data)` is not implemented by the service. 73 | - returns false if the service object is `NULL`. 74 | 75 | `obs_service_get_url()`, `obs_service_get_key()`, `obs_service_get_username()`, and `obs_service_get_password()` will be deprecated in favor of `obs_service_get_connect_info()`. 76 | 77 | In the future, if we want to support a protocol that doesn't use a server URL, this scenario is covered by this design.
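As a hedged illustration, a service might implement these callbacks along the following lines. The `my_service` struct and its fields are hypothetical; only the enum values mirror the proposal above (the `STREAM_ID` alias is omitted for brevity).

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Values mirror the proposed enum above (STREAM_ID alias omitted here). */
enum obs_service_connect_info {
	OBS_SERVICE_CONNECT_INFO_SERVER_URL = 0,
	OBS_SERVICE_CONNECT_INFO_STREAM_KEY = 2,
	OBS_SERVICE_CONNECT_INFO_USERNAME = 4,
	OBS_SERVICE_CONNECT_INFO_PASSWORD = 6,
	OBS_SERVICE_CONNECT_INFO_ENCRYPT_PASSPHRASE = 8,
};

/* Hypothetical private data for an RTMP-style service. */
struct my_service {
	const char *server_url;
	const char *stream_key;
};

/* Sketch of the proposed get_connect_info callback: one getter, the caller
 * picks which type of information it wants. */
static const char *my_service_get_connect_info(void *data, uint32_t type)
{
	struct my_service *s = data;
	switch (type) {
	case OBS_SERVICE_CONNECT_INFO_SERVER_URL:
		return s->server_url;
	case OBS_SERVICE_CONNECT_INFO_STREAM_KEY:
		return s->stream_key;
	default:
		return NULL; /* no username/password/passphrase for RTMP */
	}
}

/* Sketch of can_try_to_connect: an RTMP service needs both a URL and a key. */
static bool my_service_can_try_to_connect(void *data)
{
	struct my_service *s = data;
	return s->server_url && *s->server_url && s->stream_key && *s->stream_key;
}
```

Returning `NULL` for unsupported types is what lets a single getter replace the four per-field getters being deprecated.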
78 | 79 | ### About `rtmp-services` 80 | 81 | `rtmp-services` is a plugin that provides two services, `rtmp_common` and `rtmp_custom`. The use of "rtmp" in these three terms no longer means that it only supports RTMP; it was and is kept to avoid breakage. 82 | 83 | - In `rtmp_common`, services must at least support H264 as a video codec and AAC or Opus as an audio codec. This is a limitation required by the simple output mode. 84 | - NOTE: Some services provide both RTMP and RTMPS, which forces the plugin to check the URI scheme to return the right protocol in this situation. 85 | 86 | #### About service JSON file 87 | 88 | - For `services` objects, a `"protocol"` field will be added. This will be used to indicate the protocol for services. 89 | - The field is required if server URLs are not RTMP(s) (`rtmp://`, `rtmps://`). 90 | - Services that use a protocol that is not registered will not be shown. For example, OBS Studio without RTMPS support will not show services and servers that rely on RTMPS. 91 | - Codec fields for audio and video will be added to allow services to limit which codecs are compatible with the service. 92 | - The `"output"` field in the `"recommended"` object will be deprecated, but it will be kept for backward compatibility. `const char *(*get_output_type)(void *data)` in `obs_service_info` will no longer be used by `rtmp-services`. 93 | - The JSON schema will be modified to require the `"protocol"` field when the protocol is not auto-detectable. The same goes for the `"output"` field to keep backward compatibility. 94 | 95 | ## UI 96 | 97 | ### About custom server 98 | 99 | The protocol of the custom server will be determined through the URI scheme (e.g., `rtmp`, `rtmps`, `ftl`, `srt`, `rist`) of the server URL, falling back to RTMP. 100 | Those checks will be hardcoded in the UI until RFC 39 replaces the custom server implementation.
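A sketch of what that hardcoded check could look like (the helper name and table are hypothetical; note that `rtmps://` must be tested before `rtmp://`, since the latter is a prefix of the former):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical helper mapping a custom server URL to a protocol name,
 * falling back to RTMP as described above. */
static const char *protocol_from_url(const char *url)
{
	static const struct {
		const char *scheme;
		const char *protocol;
	} map[] = {
		{"rtmps://", "RTMPS"}, /* checked before the shorter "rtmp://" */
		{"rtmp://", "RTMP"},
		{"ftl://", "FTL"},
		{"srt://", "SRT"},
		{"rist://", "RIST"},
	};

	for (size_t i = 0; i < sizeof(map) / sizeof(map[0]); i++) {
		if (strncmp(url, map[i].scheme, strlen(map[i].scheme)) == 0)
			return map[i].protocol;
	}
	return "RTMP"; /* unknown or missing scheme: fall back to RTMP */
}
```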
101 | 102 | ### Audio encoders 103 | 104 | If the service/output is compatible with more than one audio codec, the user should be able to choose which one in the UI: 105 | - In simple mode, the user will be able to choose the codec (AAC or Opus). 106 | - In advanced mode, the user will be able to choose a specific encoder. 107 | 108 | ### Settings loading order 109 | 110 | Service settings need to be loaded first and output ones afterward to show only compatible encoders. 111 | 112 | ### Service and Output settings interactions 113 | 114 | NOTE: Since the Services API is not usable by third parties for now, this RFC will not consider a scenario where there is no simple encoder preset for the service. 115 | 116 | If changing the service results in the selected protocol changing, the Output page should be updated to list only compatible encoders. 117 | 118 | If the currently selected encoder (codec) is not supported by the service, the encoder will be changed and the user will be notified about the change. 119 | 120 | ## What if there are multiple registered outputs for one protocol? 121 | 122 | The improbable situation where a plugin registers an output for an already registered protocol could happen, so managing this possibility is required. 123 | 124 | - In `obs_service_info`: 125 | - `const char *(*get_output_type)(void *data)` will be renamed `const char *(*get_preferred_output_type)(void *data)` when an API break is possible. 126 | - `const char *obs_service_get_output_type(const obs_service_t *service)` will be deprecated and replaced by `const char *obs_service_get_preferred_output_type(const obs_service_t *service)`. 127 | 128 | A function in the UI to prefer first-party outputs will be considered. 129 | An option in the advanced output settings to choose an output type will not be considered for now.
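Tying the Outputs API pieces together, registration and protocol lookup might look like the following sketch. The trimmed `obs_output_info` and the `protocols_contain` helper are illustrative stand-ins, not the real libobs definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Trimmed stand-in for obs_output_info; the real struct has many more fields. */
struct obs_output_info {
	const char *id;
	const char *protocols; /* proposed: semicolon-separated acronyms */
};

/* Hypothetical registration of the RTMP output under both protocols. */
static const struct obs_output_info rtmp_output_info = {
	.id = "rtmp_output",
	.protocols = "RTMP;RTMPS",
};

/* Sketch of how a lookup such as obs_is_output_protocol_registered() could
 * match one acronym inside a semicolon-separated list. */
static bool protocols_contain(const char *protocols, const char *protocol)
{
	size_t len = strlen(protocol);
	const char *p = protocols;

	while (p) {
		if (strncmp(p, protocol, len) == 0 &&
		    (p[len] == ';' || p[len] == '\0'))
			return true;
		p = strchr(p, ';');
		if (p)
			p++; /* skip the separator */
	}
	return false;
}
```

Checking the character after the match prevents "RTMP" from falsely matching inside "RTMPS".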
-------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # OBS Project RFCs 2 | 3 | [OBS Project RFCs]: #obs-project-rfcs 4 | 5 | Many changes, including bug fixes and documentation improvements, can be 6 | implemented and reviewed via the normal GitHub pull request workflow. 7 | 8 | Some changes though are "substantial", and we ask that these be put through a 9 | bit of a design process and produce a consensus among the OBS Project 10 | community. 11 | 12 | The "RFC" (request for comments) process is intended to provide a consistent 13 | and controlled path for new features, functionality, and changes to undergo 14 | peer review and allow all interested parties to weigh in with their comments 15 | and ensure the direction is consistent with the vision of the project. 16 | 17 | 18 | ## Table of Contents 19 | [Table of Contents]: #table-of-contents 20 | 21 | - [Opening](#obs-project-rfcs) 22 | - [Table of Contents] 23 | - [When you need to follow this process] 24 | - [Before creating an RFC] 25 | - [What the process is] 26 | - [The RFC life-cycle] 27 | - [Reviewing RFCs] 28 | - [Implementing an RFC] 29 | - [RFC Postponement] 30 | - [Help this is all too informal!] 31 | - [License] 32 | 33 | 34 | ## When you need to follow this process 35 | [When you need to follow this process]: #when-you-need-to-follow-this-process 36 | 37 | You need to follow this process if you intend to make "substantial" changes to 38 | OBS Studio, obs-browser, or any of the other plugins/modules. What constitutes a 39 | "substantial" change is evolving based on community norms and varies depending 40 | on what part of the ecosystem you are proposing to change, but may include the 41 | following.
42 | 43 | - Entirely new features/functionality that are not currently available 44 | - Major changes to existing features, or requests to redesign those features 45 | - Changes to the internal engineering design of any part of the program 46 | - Things that drastically change the end-user experience, both from a 47 | workflow and UX perspective. 48 | - Changes to API, ABI, or settings storage format of the program 49 | 50 | Some changes do not require an RFC: 51 | 52 | - Rephrasing, reorganizing, refactoring, or otherwise "changing shape does 53 | not change meaning". 54 | - Additions that strictly improve objective, numerical quality criteria 55 | (warning removal, speedup, better platform coverage, more parallelism, trap 56 | more errors, etc.) 57 | - Additions only likely to be _noticed by_ other developers, and are 58 | invisible to end users. 59 | - rtmp-service updates 60 | 61 | If you submit a pull request to implement a new feature without going through 62 | the RFC process, it may be closed with a polite request to submit an RFC first. 63 | If you are unsure if your change requires an RFC, just ask! 64 | 65 | ## Before creating an RFC 66 | [Before creating an RFC]: #before-creating-an-rfc 67 | 68 | A hastily-proposed RFC can hurt its chances of acceptance. Low quality 69 | proposals, proposals for previously-rejected features, or those that don't fit 70 | into the projects near or long-term goals, may be quickly rejected, which can 71 | be demotivating for the unprepared contributor. Laying some groundwork ahead of 72 | the RFC can make the process smoother. 73 | 74 | Although there is no single way to prepare for submitting an RFC, it is 75 | generally a good idea to pursue feedback from other project developers 76 | beforehand, to ascertain that the RFC may be desirable; having a consistent 77 | impact on the project requires concerted effort toward consensus-building. 
78 | 79 | The most common preparations for writing and submitting an RFC include talking 80 | the idea over on our [official Discord server], or discussing the topic on our 81 | [developer discussion forum]. 82 | 83 | As a rule of thumb, receiving encouraging feedback from long-standing project 84 | developers is a good indication that the RFC is worth pursuing. 85 | 86 | 87 | ## What the process is 88 | [What the process is]: #what-the-process-is 89 | 90 | In short, to get a major feature added to OBS, one must first get the RFC 91 | merged into the RFC repository as a markdown file. At that point, the RFC is 92 | "active" and may be implemented with the goal of eventual inclusion into OBS. 93 | It is recommended that no work start on implementation until an RFC is accepted. 94 | 95 | - Fork the RFC repo [RFC repository] 96 | - Copy `0000-template.md` to `text/0000-my-feature.md`. Make sure the title 97 | is descriptive. RFC number should match the next PR number. 98 | - Fill in the RFC. Put care into the details: RFCs that do not present 99 | convincing motivation, demonstrate a lack of understanding of the design's 100 | impact, or are disingenuous about the drawbacks or alternatives tend to 101 | be poorly-received. 102 | - Submit a pull request to the [RFC repository]. As a pull request the RFC 103 | will receive design feedback from the larger community, and the author 104 | should be prepared to revise it in response. 105 | - Build consensus and integrate feedback. RFCs that have broad support are 106 | much more likely to make progress than those that don't receive any 107 | comments. Feel free to reach out to the project developers to get 108 | help identifying issues and obstacles. 109 | - The project team will discuss the RFC pull request, as much as possible in the 110 | comment thread of the pull request itself. Offline discussion will be 111 | summarized on the pull request comment thread. 
112 | - RFCs rarely go through this process unchanged, especially as alternatives 113 | and drawbacks are shown. You can make edits, big and small, to the RFC to 114 | clarify or change the design, but make changes as new commits to the pull 115 | request, and leave a comment on the pull request explaining your changes. 116 | Specifically, do not squash or rebase commits after they are visible on the 117 | pull request. 118 | - At some point, a member of the team will propose a "motion for final 119 | comment period" (FCP), along with a *disposition* for the RFC (merge, close, 120 | or postpone). 121 | - This step is taken when enough of the tradeoffs have been discussed that 122 | the team is in a position to make a decision. That does not require 123 | consensus amongst all participants in the RFC thread (which is usually 124 | impossible). However, the argument supporting the disposition on the RFC 125 | needs to have already been clearly articulated, and there should not be a 126 | strong consensus *against* that position outside of the team. Team 127 | members use their best judgment in taking this step, and the FCP itself 128 | ensures there is ample time and notification for interested parties to push 129 | back if it is made prematurely. 130 | - For RFCs with lengthy discussion, the motion to FCP is usually preceded by 131 | a *summary comment* trying to lay out the current state of the discussion 132 | and major tradeoffs/points of disagreement. 133 | - The FCP lasts ten calendar days so that it is open for at least 5 business 134 | days. This way all parties have a chance to lodge any final objections 135 | before a decision is reached. 136 | - In most cases, the FCP period is quiet, and the RFC is either merged or 137 | closed. However, sometimes substantial new arguments or ideas are raised, 138 | the FCP is canceled, and the RFC goes back into development mode. 
139 | 140 | ## The RFC life-cycle 141 | [The RFC life-cycle]: #the-rfc-life-cycle 142 | 143 | Once an RFC becomes "active" then authors may implement it and submit the 144 | feature as a pull request to the OBS Project (or appropriate module) repo. Being 145 | "active" is not a rubber stamp, and in particular still does not mean the feature 146 | will ultimately be merged; it does mean that in principle all the major interested 147 | parties have agreed to the feature and are amenable to merging it. 148 | 149 | Furthermore, the fact that a given RFC has been accepted and is "active" 150 | implies nothing about what priority is assigned to its implementation, nor does 151 | it imply anything about whether an OBS Project developer has been assigned the 152 | task of implementing the feature. While it is not *necessary* that the author 153 | of the RFC also write the implementation, it is by far the most effective way 154 | to see an RFC through to completion: authors should not expect that other project 155 | developers will take on responsibility for implementing their accepted feature. 156 | 157 | Modifications to "active" RFCs can be done in follow-up pull requests. We 158 | strive to write each RFC in a manner that it will reflect the final design of 159 | the feature; but the nature of the process means that we cannot expect every 160 | merged RFC to actually reflect what the end result will be at the time of the 161 | next major release. 162 | 163 | In general, once accepted, RFCs should not be substantially changed. Only very 164 | minor changes should be submitted as amendments. More substantial changes 165 | should be new RFCs, with a note added to the original RFC. Exactly what counts 166 | as a "very minor change" is up to the core team to decide. 
167 | 168 | 169 | ## Reviewing RFCs 170 | [Reviewing RFCs]: #reviewing-rfcs 171 | 172 | While the RFC pull request is up, the team may reach out to the author to 173 | discuss the issues in greater detail, and in some cases, the topic may be 174 | discussed at an internal team meeting. In either case, a summary from the 175 | resulting discussions will be posted back to the RFC pull request. 176 | 177 | The core team makes final decisions about RFCs after the benefits and drawbacks 178 | are well understood. These decisions can be made at any time, but the team 179 | will regularly issue decisions. When a decision is made, the RFC pull request 180 | will either be merged or closed. In either case, if the reasoning is not clear 181 | from the discussion in the thread, the team will add a comment describing the 182 | rationale for the decision. 183 | 184 | 185 | ## Implementing an RFC 186 | [Implementing an RFC]: #implementing-an-rfc 187 | 188 | Some accepted RFCs represent vital features that need to be implemented right 189 | away. Other accepted RFCs can represent features that can wait until some 190 | arbitrary developer feels like doing the work. Every accepted RFC has an 191 | associated issue tracking its implementation in the relevant OBS Project 192 | repository; thus that associated issue can be assigned a priority via the 193 | triage process that the team uses for all issues in the relevant repository. 194 | 195 | The author of an RFC is not obligated to implement it. Of course, the RFC 196 | author (like any other developer) is welcome to post an implementation for 197 | review after the RFC has been accepted. 198 | 199 | If you are interested in working on the implementation for an "active" RFC, but 200 | cannot determine if someone else is already working on it, feel free to ask 201 | (e.g. by leaving a comment on the associated issue or asking in the 202 | [official Discord server]). 
203 | 204 | 205 | ## RFC Postponement 206 | [RFC Postponement]: #rfc-postponement 207 | 208 | Some RFC pull requests are tagged with the "postponed" label when they are 209 | closed (as part of the rejection process). An RFC closed with "postponed" is 210 | marked as such because we want neither to think about evaluating the proposal 211 | nor about implementing the described feature until some time in the future, and 212 | we believe that we can afford to wait until then to do so. Postponed pull 213 | requests may be re-opened when the time is right. There is no formal process 214 | for when a postponed RFC is reopened. We recommend asking us directly about 215 | any postponed RFCs in our [official Discord Server]. 216 | 217 | Usually, an RFC pull request marked as "postponed" has already passed an 218 | informal first round of evaluation, namely the round of "do we think we would 219 | ever possibly consider making this change, as outlined in the RFC pull request, 220 | or some semi-obvious variation of it." (When the answer to the latter question 221 | is "no", then the appropriate response is to close the RFC, not postpone it.) 222 | 223 | 224 | ### Help this is all too informal! 225 | [Help this is all too informal!]: #help-this-is-all-too-informal 226 | 227 | The process is intended to be as lightweight as reasonable for the present 228 | circumstances. As usual, we are trying to let the process be driven by 229 | consensus and community norms, not impose more structure than necessary. 
230 | 231 | 232 | [official Discord server]: https://obsproject.com/discord 233 | [developer discussion forum]: https://obsproject.com/forum/list/general-development.21/ 234 | [RFC repository]: http://github.com/obsproject/rfcs 235 | 236 | ## License 237 | [License]: #license 238 | 239 | This repository is currently licensed under: 240 | 241 | * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT) 242 | 243 | ### Contributions 244 | 245 | Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the MIT license, shall be licensed as above, without any additional terms or conditions. 246 | --------------------------------------------------------------------------------