├── Diagrams
│   ├── Coordinate Systems.png
│   ├── DeferredRender.png
│   ├── IndirectLighting.png
│   ├── Parallel CommandList RHI.png
│   ├── PrePass.png
│   ├── RDG_Aliasing.png
│   ├── RDG_CommandDependencyTree.png
│   ├── RDG_CommandQueue.png
│   ├── RDG_PassResourceLifetime.png
│   ├── RDG_ResourceTransition.png
│   ├── RDG_SettingShaderParams.png
│   ├── RDG_ShaderParameterBinding.png
│   ├── RDG_Stages.png
│   ├── RHIUnrealDiagram.png
│   ├── SceneTextureStruct.png
│   ├── ShadowProjection.png
│   ├── Triangle Render.png
│   ├── TriangleOutput.png
│   └── VulkanPipeLine.png
├── Fundamentals of Graphics Programming.md
├── Preface.md
├── README.md
├── References.md
├── Render Dependency Graph (RDG).md
├── Render Hardware Interface (RHI).md
├── Triangle Shader.md
└── Unreal Engine Rendering Pipeline.md

/Diagrams/VulkanPipeLine.png:
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/staticJPL/Render-Dependency-Graph-Documentation/ddcdfb37eb27647ec997a53a837945b638e9a423/Diagrams/VulkanPipeLine.png -------------------------------------------------------------------------------- /Fundamentals of Graphics Programming.md: -------------------------------------------------------------------------------- 1 | # Fundamentals of Graphics Programming 2 | 3 | ## Graphics Pipeline 4 | 5 | In order to grasp the content of this document a basic understanding of a graphics rendering pipeline is needed. Below is an example of the Vulkan Graphics 6 | Pipeline. You can skip this section if you know already about the graphics pipeline. 7 | 8 | ![[Unreal Engine Render Dependency Graph/Diagrams/VulkanPipeLine.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/08e9c57045b6a88b7918499961a3c2e2e23e83ff/Diagrams/VulkanPipeLine.png) 9 | 10 | There are many types of graphics pipelines used across various platforms. Some examples include Vulkan, DirectX, and OpenGL. All pipelines share common steps, such as the Input Assembler, Vertex Shader, Rasterization, and Fragment Shader (also known as the pixel shader). Some APIs have different steps before or after the Rasterizer that may optimize the pipeline for specific operations on that platform. 11 | 12 | `Step 1.` The input assembler collects the raw vertex data from the buffers you specify and, to the best of my knowledge, in combination with shader code like HLSL, allows you to bind to input semantics, which I will explore later on. 13 | 14 | `Step 2.` The Vertex shader is run for every vertex and generally applies transformations to convert vertex positions from model space to screen space. It also passes per-vertex data down the pipeline. Generally, before performing any Matrix Transformations from the vertex to world space, local space, or screen space, the vertex data will be interpreted in the standard Clip Space. 15 | 16 | ![[Unreal Engine Render Dependency Graph/Diagrams/Coordinate Systems.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/08e9c57045b6a88b7918499961a3c2e2e23e83ff/Diagrams/Coordinate%20Systems.png) 17 | 18 | `Step 3.` The Geometry shader is run on every primitive (triangle, line, point) and can discard it or output more primitives than came in. This is similar to the tessellation shader, but much more flexible. However, it is not used much in today's applications, reiterating that some APIs are different. 19 | 20 | `Step 4.` The Rasterization stage discretizes the primitives into fragments. These are the pixel elements that they fill on the framebuffer. Any fragments that fall outside the screen are discarded, and the attributes outputted by the vertex shader are interpolated across the fragments, as shown in the figure. Usually, the fragments that are behind other primitive fragments are also discarded here because of depth testing. Depth testing is used alongside a depth buffer, which holds info for the culling process. From my understanding, the Rasterizer is the only place in the pipeline that’s not programmable. 21 | 22 | `Step 5.` Fragment Shader or the Pixel shader, as Unreal calls it. The pixel shader is invoked for every fragment that survives the rasterization stage and determines which framebuffer(s) the fragments are written to along with which color and depth values. 
It can do this using the interpolated data from the vertex shader, which can include things like texture coordinates and normals for lighting. If interpolation is confusing, imagine this example scenario between the vertex shader and pixel shader. 23 | 24 | Suppose you had two vertex values A & B. Both vertices contain a unique 2D screen position and a 3D pixel color. Now if vertex A was a point on the screen with 25 | a color value of Red and vertex B was another point with the color value blue and you drew a line connecting A to B you’d have something like this. 26 | Red(VA)——Purple—-Blue(B), The vertex shader would draw the line connecting A to B. Then pass that off to the Pixel shader which would interpolate the color 27 | between A to B. Point A is purely red and Point B is purely blue giving us a middle color of purple (which is a mix of Red and blue). 28 | 29 | `Step 6`. API specific operations can be seen before the Pixel Shader hands off the final image to the framebuffer. 30 | 31 | ### GPU Buffers 32 | 33 | 34 | The key thing to understand is that a buffer is just a resource stored in memory on the GPU. These resources are declared on the CPU and then mem copied to the GPU to be used in various rendering processes. Alignment is important in most cases when defining data on the CPU before passing it to the GPU, so be prepared to encounter discussions about padding and alignment when interfacing with Graphics APIs. There are many kinds of buffers, but some of the most common ones are listed below. 35 | 36 | **Vertex Buffer** - Holds data that defines all vertices. The input assembler will then read this and bind it to the vertex shader specified in the GPU Pipeline. 37 | 38 | **Index Buffer** - The Index buffer is an array of pointers to vertices in the vertex buffer. The index buffer usually reads three vertices at a time. However, you can specify offsets. This allows us to create multiple combinations from the vertex buffer to feed to the vertex shader. We can define different draw calls to use, such as screen space only quads, triangles, or other primitives. 39 | 40 | **Command Buffer** - Command buffers are used to record the commands required to render a frame. These commands include tasks such as setting up the viewport, binding shaders, textures, and issuing draw calls to render the geometry. Once the command buffer is recorded, it can be submitted to the GPU for execution. 41 | 42 | **Depth Buffer** - The depth buffer (also known as the Z-buffer) is a memory buffer used in 3D graphics to store the depth information for each pixel on the screen. It is used to determine the relative depth of the objects in a scene and to ensure that objects closer to the viewer are drawn on top of objects further away. 43 | 44 | **GBuffer** - The GBuffer, short for Geometry Buffer, is a set of multiple off-screen render targets used to store various information about the geometry in a 3D scene. This information can include things like depth, normals, albedo (color), specular, and other data. The GBuffer is typically used in deferred shading, a technique that separates the process of capturing the geometry of a scene from the process of applying lighting and shading to that geometry. 45 | 46 | **Texture Buffer** - A texture buffer is a GPU buffer used to store texture data. A texture is a 2D or 3D image applied to the surfaces of 3D models to add visual detail and realism to a scene. Texture buffers store the pixel data for textures, including color, alpha, and normal information. 
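To make the vertex buffer / index buffer relationship more concrete, here is a minimal CPU-side sketch of the kind of data that would later be copied into those GPU buffers. The struct, names, and values are purely illustrative (they are not engine types), and in practice the exact layout, padding, and upload mechanism depend on the graphics API you use.

```cpp
// Illustrative CPU-side vertex layout: a 2D position and an RGBA color per vertex.
struct FSimpleVertex
{
	float Position[2]; // matches a "float2 InPosition" style vertex shader input
	float Color[4];    // matches a "float4 InColor" style vertex shader input
};

// Vertex buffer contents: the three corners of a triangle.
static const FSimpleVertex TriangleVertices[] =
{
	{ {  0.0f,  0.75f }, { 1.0f, 0.0f, 0.0f, 1.0f } }, // top, red
	{ {  0.75f, -0.75f }, { 0.0f, 1.0f, 0.0f, 1.0f } }, // bottom right, green
	{ { -0.75f, -0.75f }, { 0.0f, 0.0f, 1.0f, 1.0f } }, // bottom left, blue
};

// Index buffer contents: three indices into the vertex buffer, read as one triangle.
static const unsigned short TriangleIndices[] = { 0, 1, 2 };
```

Once data like this is copied into GPU-visible memory, the input assembler reads the vertex buffer using the indices (and any offsets) from the index buffer and feeds each vertex to the vertex shader.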
### HLSL

In Unreal Engine, shader code is written in HLSL and uses the file extensions .usf and .ush.

I like to imagine HLSL like assembly code, where you have input registers and output registers. You define your inputs from your vertex shader and its outputs as inputs to your pixel shader. Regular semantics are used to define the meaning of input and output data for a particular stage of the pipeline. This could include things like position, color, normal, and texture coordinates. You can name these "regular semantics" almost anything you like, but you will need to bind resources (buffers) to them using the Input Assembler, as illustrated earlier.

"System semantics", on the other hand, are used to define the meaning of input and output data that is specific to a particular platform or API. These semantics define how the data is passed between the GPU and the API, such as the DirectX or OpenGL API.

For example, you would use the "SV_VERTEXID" semantic to indicate that a particular input variable contains the vertex ID, which is a system semantic specific to DirectX. This semantic is tied to a DirectX-specific draw call, where SV_VERTEXID is incremented based on the current vertex being processed and the vertex buffer the vertex shader is using.

Below is the Triangle HLSL code we will be binding to in order to draw our triangle later on.

```cpp
// TriangleVS: bindable name for the entry point of a custom vertex shader
void TriangleVS(
	in float2 InPosition : ATTRIBUTE0,    // First input, bindable regular semantic
	in float4 InColor : ATTRIBUTE1,       // Second input, bindable regular semantic
	out float4 OutPosition : SV_POSITION, // System semantic: output position to the Pixel Shader
	out float4 OutColor : COLOR0          // Regular semantic: output color to the Pixel Shader
	)
{
	OutPosition = float4(InPosition, 0, 1);
	OutColor = InColor;
}

// TrianglePS: bindable entry point for a custom pixel shader
void TrianglePS(
	in float4 InPosition : SV_POSITION, // System semantic: input position from the Vertex Shader
	in float4 InColor : COLOR0,         // Regular semantic: input color from the Vertex Shader
	out float4 OutColor : SV_Target0    // System semantic: render target, i.e. the 2D texture resource rendered to
	)
{
	OutColor = InColor;
}
```
--------------------------------------------------------------------------------
/Preface.md:
--------------------------------------------------------------------------------

Preface: The goal of this document is to highlight the rendering pipeline used inside Unreal Engine 5.1. The paper will outline some basics of the graphics pipeline and how Unreal interfaces with many Graphics APIs to perform rendering.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

# Rendering & Graphics Programming with Unreal Engine

Preface: The goal of this document is to highlight the rendering pipeline used inside Unreal Engine 5.1. The paper will outline some basics of the graphics pipeline and how Unreal interfaces with many Graphics APIs to perform rendering.

### Table of Contents:
1. 
[Fundamentals of Graphics Programming](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/38befa753f23766c3c26196799a320c90a42b5e7/Fundamentals%20of%20Graphics%20Programming.md) 7 | 2. [Unreal Engine Rendering Pipeline](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/38befa753f23766c3c26196799a320c90a42b5e7/Unreal%20Engine%20Rendering%20Pipeline.md) 8 | 3. [Render Dependency Graph](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/38befa753f23766c3c26196799a320c90a42b5e7/Render%20Dependency%20Graph%20(RDG).md) 9 | 4. [SceneView Extenson Triangle Shader](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/38befa753f23766c3c26196799a320c90a42b5e7/Triangle%20Shader.md) 10 | 5. [References](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/38befa753f23766c3c26196799a320c90a42b5e7/References.md) 11 | 6. [Triangle Example Code](https://github.com/staticJPL/TriangleViewExtensionExample) 12 | 13 | -------------------------------------------------------------------------------- /References.md: -------------------------------------------------------------------------------- 1 | # References 2 | 3 | In order for me piece this all together it took a bit of time digging through Engine Source code and googling articles. In this section I will link the references I 4 | bookmarked and put them in the order of learning. This way you can review this document in order of the references which build upon the knowledge you need 5 | to understand rendering in Unreal and basics for Graphic Programming. 6 | 7 | ### Basic Graphics Programming References 8 | -------------------------------------------------- 9 | **DirectX Pipeline** 10 | https://www.youtube.com/watchv=pfbWt1BnPIo&list=PLqCJpWy5Fohd3S7ICFXwUomYW0Wv67pDD&index=21&ab_channel=ChiliTomatoNoodle 11 | 12 | **DirectX Architecture/Swap Chain** 13 | https://www.youtube.com/watch?v=bMxNN9dO4cI&list=PLqCJpWy5Fohd3S7ICFXwUomYW0Wv67pDD&index=19&ab_channel=ChiliTomatoNoodle 14 | 15 | **Vulkan Uniform Buffers** 16 | https://www.youtube.com/watch?v=may_GMkfs5k&t=607s&ab_channel=BrendanGalea 17 | 18 | **Vulkan Index and Staging Buffers** 19 | https://www.youtube.com/watch?v=qxuvQVtehII&ab_channel=BrendanGalea 20 | 21 | **Deferred Rendering** 22 | https://www.youtube.com/watch?v=n5OiqJP2f7w&ab_channel=BenAndrew 23 | 24 | Shader Basics 25 | --------------------------------------------------------------------------- 26 | https://www.youtube.com/watch?v=kfM-yu0iQBk&ab_channel=FreyaHolm%C3%A9r 27 | 28 | HLSL Semantics 29 | https://learn.microsoft.com/en-us/windows/win32/direct3dhlsl/dx-graphics-hlsl-semantics 30 | 31 | Unreal Engine Passes & Rendering Basics 32 | --------------------------------------------------------------------- 33 | 34 | Unreal Engines Render Passes 35 | https://unrealartoptimization.github.io/book/profiling/passes/ 36 | 37 | Unreal Engine Rendering (Drawing Policies are Deprecated) 38 | https://medium.com/@lordned/unreal-engine-4-rendering-overview-part-1-c47f2da65346 39 | 40 | Unreal Engine Vertex Factories 41 | (Basically how you’re suppose to create vertex buffers, there are macros for this which you pass to RDG) 42 | 43 | https://medium.com/realities-io/creating-a-custom-mesh-component-in-ue4-part-1-an-in-depth-explanation-of-vertex-factories-4a6fd9fd58f2 44 | 45 | Unreal Engine RHI 46 | ---------------------------------------------- 47 | Analysis of Unreal Engine Rendering System: (Chinese) timlly-chang 48 | 
https://www.cnblogs.com/timlly/p/15156626.html#101-%E6%9C%AC%E7%AF%87%E6%A6%82%E8%BF%B0 49 | 50 | RDG Architecture by Riccardo Loggini 51 | https://logins.github.io/graphics/2021/05/31/RenderGraphs.html 52 | 53 | RDG Analysis (Chinese) 54 | https://blog.csdn.net/qjh5606/article/details/118246059 55 | https://juejin.cn/post/7085216072202190856 56 | 57 | Shaders with RDG 58 | https://logins.github.io/graphics/2021/03/31/UE4ShadersIntroduction.html 59 | 60 | Render Architecture 61 | https://ikrima.dev/ue4guide/graphics-development/render-architecture/base-usf-shaders-code-flow/ 62 | 63 | Epic Games RDG Crash Course 64 | https://epicgames.ent.box.com/s/ul1h44ozs0t2850ug0hrohlzm53kxwrz 65 | 66 | Epic Games RDG Documentation 67 | https://docs.unrealengine.com/5.1/en-US/render-dependency-graph-in-unreal-engine/ 68 | 69 | Scene View Extension Class 70 | ------------------------------------------ 71 | Forum Post 72 | https://forums.unrealengine.com/t/using-sceneviewextension-to-extend-the-rendering-system/600098 73 | 74 | Caius’ Blog Global Shaders using Scene View Extension Class 75 | https://itscai.us/blog/post/ue-view-extensions/ 76 | -------------------------------------------------------------------------------- /Render Dependency Graph (RDG).md: -------------------------------------------------------------------------------- 1 | # Render Dependency Graph (RDG) 2 | 3 | In 2017 Yuriy O’Donnell pioneered a render graph system while working for Frostbite and presented the first Frame Graph at GDC. Consequently the series of 4 | advantages this system provided was taken into Unreal Engine and as of 2021 the use of the render graph has become the standard in AAA game engine 5 | development. 6 | 7 | Riccardo Loggini's work below describes in detail how the Render Dependency Graph operates. 8 | 9 | ![[Unreal Engine Render Dependency Graph/Diagrams/RDG_CommandQueue.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/563954a23906d392d55727b1132446abdd73d0dd/Diagrams/RDG_CommandQueue.png) 10 | 11 | ### Properties 12 | 13 | 14 | The Render Dependency Graph abstracts render operations into a concise form for generating render code. This approach enhances code clarity and facilitates debuggability, enabling tools to interpret resource lifetimes and render pass dependencies effectively, thereby reducing development time. 15 | 16 | The next generation of Graphics APIs such as DX12 and Vulkan manage resource states transitions depending on operations performed. 17 | 18 | ![[Unreal Engine Render Dependency Graph/Diagrams/RDG_ResourceTransition.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/563954a23906d392d55727b1132446abdd73d0dd/Diagrams/RDG_ResourceTransition.png) 19 | 20 | Using render graphs allows operations to be handled automatically without manual input. As seen in the diagram above, a graphics programmer can declare 21 | what resources are needed for their shader input. Since Resource transitions are handled by the graph, you can visualize the 22 | green lines as “read” and red lines as “write” operations. Each render graph node has knowledge of these operations and allows it to place `Barriers` for 23 | resource transitions. This means that optimal barriers placed ensure there is an optimal command queue setup. So if `resource A` is used as a shader resource for 24 | `Pass 1` but as a render target for `pass 2` then you will still need a resource transition to render between these two targets. 
This reduces calls and saves memory allocation.

![[Unreal Engine Render Dependency Graph/Diagrams/RDG_PassResourceLifetime.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/563954a23906d392d55727b1132446abdd73d0dd/Diagrams/RDG_PassResourceLifetime.png)

In the image above, for example, `resource A` is used only up to the third pass. On the other hand, `resource C` starts getting used in the fourth pass, so its lifetime does not overlap `resource A`, meaning we can reuse the same memory for both resources.

The same concept applies to resources A and D. In general there will be multiple ways to overlap our memory allocations, so we will also need clever ways to detect the best allocation strategy.


### RDG Resources

There are resources that are used on a "per-frame" basis: these are technically called graph or transient resources, since their lifetime can be fully handled by the render graph. Some examples of "per-frame" resources are the `GBuffers` and the camera depth, which are consumed later by the deferred lighting pass.


`Transient` resources are intended to exist only for a specific duration within a single frame of the render graph, which offers significant potential for memory reuse. There are other resources that are used externally and depend on other systems, such as a window swapchain back buffer. In this case, the graph will limit itself to managing their state; these are known as `external resources`.

### Transient Resource System

Because the lifetime of transient resources is short, their memory can be reused through what DX12 terminology calls "resource aliasing".

![[Unreal Engine Render Dependency Graph/Diagrams/RDG_Aliasing.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/563954a23906d392d55727b1132446abdd73d0dd/Diagrams/RDG_Aliasing.png)

Aliased resources can spare up to 50% of the used resource allocation space, especially when using a render graph. Aliasing adds extra resource-management complexity to the scene, but if we want to save memory it is almost always worth it.

### Build Cross-Queues Synchronization

![[Unreal Engine Render Dependency Graph/Diagrams/RDG_CommandDependencyTree.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/563954a23906d392d55727b1132446abdd73d0dd/Diagrams/RDG_CommandDependencyTree.png)

Lastly, the graph allows us to use multiple command queues and run them in parallel. A dependency tree can aid in synchronizing this mechanism to prevent race conditions on shared resources.

An acyclic graph of render passes emerges after laying down every pass from the dependent queues. Each level of the tree is referred to as a dependency level and is comprised of passes independent of each other in terms of their resource usage. This arrangement ensures that every pass in the same dependency level can potentially run asynchronously. While it's possible to have multiple passes belonging to the same queue in the same dependency level, this does not pose an issue.

As a consequence, a synchronization point with a GPU fence at the end of every dependency level can execute the needed resource transitions for every queue on a single graphics command list. This approach of course does not come for free, since using fences and syncing different command queues has a time cost.
In addition to that, it will not always be the optimal and smallest set of synchronizations, but it will produce acceptable performance and should cover all the possible edge cases.

This highlights the advantages of the Render Graph in a nutshell:

- Better resource management
- Easier debugging tools
- Parallel command lists
- Synchronization

### RDG Dynamics

Some new terminology needs to be addressed on top of what we have already seen in the context of the RDG.

- **View**: A single "viewport" looking at the FScene. When playing in split screen, or rendering the left and right eye in VR, for example, there will be two views (inside a view family).

- **Vertex Factory**: A class that encapsulates vertex data to be linked as an input to a vertex shader. There are different vertex factory types depending on the kind of mesh we are rendering.

- **Pooled Resource**: A graphics resource created and handled by the RDG. Its availability is guaranteed only during the execution of RDG passes.

- **External Resource**: A graphics resource created independently from the RDG.

The workflow of Unreal Engine's RDG can be identified as a three-step process:

**Setup phase**: Declares which render passes will exist and what resources will be accessed by them.

**Compile phase**: Figures out each resource's lifetime and makes resource allocations accordingly.

**Running/Execute phase**: All graph nodes get executed.

![[Unreal Engine Render Dependency Graph/Diagrams/RDG_Stages.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/563954a23906d392d55727b1132446abdd73d0dd/Diagrams/RDG_Stages.png)

### Setup Stage

The setup stage begins inside `FRenderModule` and is triggered from the render thread's main function. This builds passes for the visible views and all objects associated with them.

```cpp
FDeferredShadingSceneRenderer::Render(FRHICommandListImmediate& RHICmdList)
```

The generic syntax for creating any RDG pass looks like this:

```cpp
// Instantiate the resources we need for our pass
FShaderParameterStruct* PassParameters = GraphBuilder.AllocParameters<FShaderParameterStruct>();
// Fill in the pass parameters
PassParameters->MyParameter = GraphBuilder.CreateSomeResource(MyResourceDescription, TEXT("MyResourceName"));
// Define the pass and add it to the RDG builder
GraphBuilder.AddPass(
	RDG_EVENT_NAME("MyRDGPassName"),
	PassParameters,
	ERDGPassFlags::Raster,
	[PassParameters, OtherDataToCapture](FRHICommandList& RHICmdList)
{
	// … pass logic here, render something! ...
});
```

- **Pass Name**: This will ultimately be represented by an object of type FRDGEventName containing the description of the pass. It is used for debugging and profiling tools.

- **Pass Parameters**: This object is expected to derive from a Shader Parameter Struct, which has to be created with GraphBuilder.AllocParameters() and needs to be defined with the macro `BEGIN_SHADER_PARAMETER_STRUCT(FMyShaderParameters, )`. The PassParameters will need to distinguish between at least shader resources and render targets in order to detect the proper transitions (more on this subject in the Shader Parameters section). It can come from either:
- A shader uniform buffer proper to a shader (e.g. 
in the case we have only one shader like a compute shader) and so in this case the PassParameters will be of type FMyShader::FParameters. 121 | - A generically defined shader uniform buffer, which will usually be defined in the source (.cpp) file. The usual name of these buffers will include“PassParameters” to specify that they are used for a whole pass instead of a single shader. 122 | 123 | - **Pass Flags**: Set of flags of type ERDGPassFlags, they are mainly used to specify the kind of operations we will be doing inside the pass, e.g. raster, copy and compute. 124 | 125 | - **Lambda Function**: This will contain the “body” of our pass, and so the logic to execute at run time. With the lambda we can capture any number of objects we want to later use to set up our rendering operations. Remember, after collection, they are not executed immediately and will be delayed but there are situations where it can be immediate. 126 | 127 | ### Compile Phase 128 | 129 | The `compile` phase is completely autonomous and a “non-programmable” stage, in the sense that the render pass programmer does not have influence on it. 130 | In this phase the graph gets inspected to find all the possible flow optimizations, it will: 131 | 132 | 1. Exclude unreferenced but defined resources and passes: if we want to draw a second debug view of the scene, we might be interested to draw only certain passes for it. 133 | 2. Compute and handle used resources lifetime. 134 | 3. Resources Allocation. 135 | 4. Build optimized resource transition graph. 136 | 137 | ### Running Stage 138 | 139 | The Running Stage describes the time when the lambda function of an RDG pass gets executed. This will happen asynchronously and the exact 140 | moment is completely up to the RDG. 141 | 142 | When the lambda body executes, the available input will be the variables captured by the lambda and a command list (either `RHIComputeCommandList&` for 143 | `Compute `/ `AsyncCompute` workloads or `FRHICommandList&` for raster operations). 144 | 145 | What essentially happens inside the lambda body is the following 146 | 147 | - Set a pipeline state object, e.g. setting rasterizer, blend and depth/stencil states. 148 | - Set shaders and their attributes. 149 | - Selects what shaders to use and binds them to the current pipeline. 150 | - Defines parameters which means binding resources to the shader slots on the current command list. 151 | - Send copy/draw/dispatch commands and send render commands to the command list. 152 | 153 | ### Execute Phase 154 | 155 | Execution phase is as simple as navigating through all the passes that survived the compile phase culling and executing the draw and dispatch commands on the 156 | list. Up until the execute phase all the resources were handled by opaque and abstract references, while at execute phase we access the real GPU API resources 157 | and set them in the pipeline. The preparation of command lists, on the CPU side, can be potentially parallelized quite a lot: in most of the cases, each pass 158 | command list setup is independent from each other. Aside from that, command list submissions on a single command queue is not thread safe, and in any case 159 | we would first need to determine if adding parallelization would bring significant gains. 160 | 161 | #### Shader Types 162 | 163 | The base class for shaders is FShader, but we find two main types of shaders that we can use: 164 | - FGlobalShader: all the shaders deriving from it are part of the global shaders group. 
A global shader produces a single instance across the engine, and it can only use global parameters. 165 | - FMaterialShader: all the derived classes are the ones that use parameters tied to materials. If they also use parameters tied to the vertex factory, then the class to refer to is FMeshMaterialShader. 166 | 167 | Note* I’ll be using the Global Shader since I rendered inside the post processing pass. The Material shaders are heavily tied to a vertex factory, which I didn’t 168 | have time to cover and or experiment with. I will provide some resources for that in the reference section if you’re interested. 169 | 170 | ### Shader Parameters 171 | 172 | ![[Unreal Engine Render Dependency Graph/Diagrams/RDG_ShaderParameterBinding.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/563954a23906d392d55727b1132446abdd73d0dd/Diagrams/RDG_ShaderParameterBinding.png) 173 | 174 | The shader parameters are the objects that are gonna identify the resource slots used by a shader. These parameters are used when setting resources for a 175 | graphics compute or computer operation. 176 | 177 | The process of setting a shader parameter will consist in binding a resource to the command list at the index specified by the shader parameter. 178 | 179 | **Note*** if binding and context is confusing, this is because an understanding a lower level GPU and graphics API knowledge is probably missing. To address this 180 | I’ve given a brief explanation in the supplemental explanation section. 181 | 182 | We have the following types of Shader Parameters, as seen in `ShaderParametersUtils.h` and `ShaderParameters.h`: 183 | 184 | - **FShaderParameter**: shader parameter’s register binding. e.g. float1/2/3/4, can be an array, UAV. 185 | 186 | - **FShaderResourceParameter**: shader resource binding (textures or samplerstates). 187 | 188 | - **FRWShaderParameter**: class that binds either a UAV or SRV resource. 189 | 190 | - **TShaderUniformBufferParameter**: shader uniform buffer binding with a specific structure (templated). This parameter references a struct which contains all the resources defined for a specific shader. More info later. 191 | 192 | As many cases in Unreal Engine, shader parameter classes use macros to define how they are composed. The most important part of the shader parameters is its 193 | Layout, an internal variable defined at compile time that specifies its structure, composed of `Layout Fields`. 194 | 195 | ```cpp 196 | LAYOUT_FIELD(MyDataType, MyFiledName); 197 | LAYOUT_FIELD(FShaderResourceParameter, UAVParameter); 198 | ``` 199 | 200 | The way the layout will be used depends on the type of shader parameters and it can contain any data (e.g. a parameter index). Its purpose is always to hold 201 | information about shader parameters (e.g. CBVs, SRVs, UAVs in D3D12) so that we can use them to bind to resources at the moment of executing the shader in 202 | the command list. 203 | 204 | ### Shader Uniform Buffer Parameter 205 | 206 | The concept of `Uniform Buffer Parameter` in Unreal Engine is very different from what we are used to in standard computer graphics: here it is essentially 207 | defined as a struct of shader parameters. 208 | 209 | Uniform Buffers, as previously mentioned in the RDG chapter, can be defined using a Shader Parameter Struct macro, either inside a shader class declaration or 210 | in global scope. 
211 | 212 | ```cpp 213 | BEGIN_SHADER_PARAMETER_STRUCT(FMyShaderParameters, ) 214 | SHADER_PARAMETER_RDG_TEXTURE(Texture2D, InputTexture) 215 | SHADER_PARAMETER_SAMPLER(SamplerState, InputSampler) 216 | RENDER_TARGET_BINDING_SLOTS() 217 | END_SHADER_PARAMETER_STRUCT() 218 | ``` 219 | 220 | This family of macros is very flexible and it can contain: 221 | 222 | - **Shader Parameters**: textures, samplers, buffers and descriptors. For a full list of macros reference to ShaderParameterMacros.h. 223 | - **Nested Structs**: we can encapsulate the definition of a shader parameter struct into another. This is achieved using the macro `SHADER_PARAMETER_STRUCT(StructType,MemberName)` and `SHADER_PARAMETER_STRUCT_ARRAY(..)`.` 224 | - **Binding Slots**: Available with the macro `RENDER_TARGET_BINDING_SLOTS()` which adds an array of assignable Render Targets to the parameter struct we use. 225 | 226 | **Usage** 227 | - If defined outside a shader, the name “FMyShaderParameters” will usually be `F` PassParameters. These shader parameter structs are used as Pass Parameters for the RDG, as described in the RDG section of this article. 228 | - If defined inside a shader class, the name “FMyShaderParameters” will usually be `FParameters`. We will also need to use this macro at the top of the shader class `SHADER_USE_PARAMETER_STRUCT(FMyShaderClass, FBaseClassFromWhatMyShaderDerivesFrom)`. 229 | 230 | **Set Shader Parameters** 231 | 232 | Most of the times when using a shader inside an RDG pass you can call 233 | ```cpp 234 | SetShaderParameters(TRHICmdList& RHICmdList, const TShaderRef<TShaderClass>& Shader, TShaderRHI* ShadeRHI, const typename 235 | TShaderClass::FParameters& Parameters) 236 | ``` 237 | 238 | ![[Unreal Engine Render Dependency Graph/Diagrams/RDG_SettingShaderParams.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/563954a23906d392d55727b1132446abdd73d0dd/Diagrams/RDG_SettingShaderParams.png) 239 | 240 | from ShaderParameterStruct.h will be called to bind input resources to a specific shader. 241 | 242 | The function will first call `ValidateShaderParameters(Shader, Parameters);` to check that all the input shader resources cover all the expected shader parameters. 243 | Then it will start to bind all the resources to the relative parameters: every parameter type listed at the beginning of this section (e.g. FShaderParameter, 244 | FShaderResourceParameter, etc.) will have their own call for getting bound to the command list. A scheme of when we set a uniform buffer resource is as follows: 245 | 246 | **BufferIndex** is used for all the `FParameterStructReference`, but also for the basic `FParameters` elements, since they are stored in buffers as well. 247 | What happens inside CmdList::SetUniformBuffer with such input parameters is completely up to the render platform we are using and it varies a lot from case to 248 | case. 249 | 250 | ### Shader Macro Usage & Setup 251 | 252 | **Local Parameters** 253 | 254 | Starting with some Parameters, if we want to generate our own Uniform Buffer (Constant Buffer used by many shaders). Let’s show an example between HLSL 255 | declarations and our Macro Setup 256 | 257 | ```cpp 258 | float2 ViewPortSize; 259 | float4 Hello; 260 | float World; 261 | float3 FooBarArray[16]; 262 | Texture2D BlueNoiseTexture; 263 | SamplerState BlueNoiseSampler; 264 | // Note Sampler States are objects that we used to "Sample a texture" basically reading 265 | // the sample if we want to do masking, blending or other render wizardry. 
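// The scene color texture/sampler pair below is read (sampled) by the shader,
// while the RWTexture2D further down is an unordered access view (UAV) that a
// compute shader can write to.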
Texture2D SceneColorTexture;
SamplerState SceneColorSampler;
RWTexture2D SceneColorOutput;
```

As explained in the previous section, we need to use an internal macro so we can bind these parameters.

```cpp
BEGIN_SHADER_PARAMETER_STRUCT(FMyShaderParameters,)
	SHADER_PARAMETER(FVector2f,ViewPortSize)
	SHADER_PARAMETER(FVector4f,Hello)
	SHADER_PARAMETER(float,World)
	SHADER_PARAMETER_ARRAY(FVector3f,FooBarArray,[16])
	SHADER_PARAMETER_TEXTURE(Texture2D,BlueNoiseTexture)
	SHADER_PARAMETER_SAMPLER(SamplerState,BlueNoiseSampler)
	SHADER_PARAMETER_TEXTURE(Texture2D,SceneColorTexture)
	SHADER_PARAMETER_SAMPLER(SamplerState,SceneColorSampler)
	SHADER_PARAMETER_UAV(RWTexture2D,SceneColorOutput)
END_SHADER_PARAMETER_STRUCT()
```

The shader parameter struct macro will fill in all the data internally to generate reflection data at compile time.

```cpp
const FShaderParametersMetadata* ParametersMetadata = FMyShaderParameters::FTypeInfo::GetStructMetadata();
```

**Alignment Requirements**

You need to conform to alignment. Unreal adopts the principle of 16-byte automatic alignment, so the order of the members matters when declaring the struct.

The main rule is that each member is aligned to the next power of two of its size, but only if it is larger than 4 bytes. For example:

- Pointers are 8-byte aligned
- float, uint32, int32 are 4-byte aligned
- FVector2f and FIntPoint are 8-byte aligned
- FVector and FVector4f are 16-byte aligned

If you don't follow this alignment, the engine will throw an assert at compile time.

Automatic alignment of each member will inevitably create padding, as indicated below:

```cpp
BEGIN_SHADER_PARAMETER_STRUCT(FMyShaderParameters,)
	SHADER_PARAMETER(FVector2f,ViewPortSize) // 2 x 4 bytes
	// 2 x 4 bytes of padding
	SHADER_PARAMETER(FVector4f,Hello) // 4 x 4 bytes
	SHADER_PARAMETER(float,World) // 1 x 4 bytes
	// 3 x 4 bytes of padding
	SHADER_PARAMETER_ARRAY(FVector3f,FooBarArray,[16]) // 4 x 4 x 16 bytes
	SHADER_PARAMETER_TEXTURE(Texture2D,BlueNoiseTexture) // 8 bytes
	SHADER_PARAMETER_SAMPLER(SamplerState,BlueNoiseSampler) // 8 bytes
	SHADER_PARAMETER_TEXTURE(Texture2D,SceneColorTexture) // 8 bytes
	SHADER_PARAMETER_SAMPLER(SamplerState,SceneColorSampler) // 8 bytes
	SHADER_PARAMETER_UAV(RWTexture2D,SceneColorOutput) // 8 bytes
END_SHADER_PARAMETER_STRUCT()
```

```cpp
SHADER_PARAMETER(FVector4f,WorldPositionAndRadius) // DON'T DO THIS
// ----------------------- //
// Do this instead
SHADER_PARAMETER(FVector3f,WorldPosition) // Good
SHADER_PARAMETER(float,WorldRadius) // Good
SHADER_PARAMETER_ARRAY(FVector4f,WorldPositionRadius,[16]) // Good
```

**Binding the shader**

After we've set up the shader parameters and the alignment is correct, declare the parameter struct in the shader class with

```cpp
SHADER_USE_PARAMETER_STRUCT(FMyShaderCS,FGlobalShader)
```

```cpp
class FMyShaderCS : public FGlobalShader
{
	DECLARE_GLOBAL_SHADER(FMyShaderCS);
	SHADER_USE_PARAMETER_STRUCT(FMyShaderCS,FGlobalShader)

	static bool ShouldCompilePermutation(const FGlobalShaderPermutationParameters& Parameters)
	{
		return true;
	}

	using FParameters = FMyShaderParameters;
};
// Additionally,
// we could also inline the parameter struct definition directly in the shader class, like this:
class FMyShaderCS : public FGlobalShader
{
	DECLARE_GLOBAL_SHADER(FMyShaderCS);
	SHADER_USE_PARAMETER_STRUCT(FMyShaderCS,FGlobalShader)

	static bool ShouldCompilePermutation(const FGlobalShaderPermutationParameters& Parameters)
	{
		return true;
	}

	BEGIN_SHADER_PARAMETER_STRUCT(FParameters,)
		SHADER_PARAMETER(FVector2f,ViewPortSize) // 2 x 4 bytes
		// 2 x 4 bytes of padding
		SHADER_PARAMETER(FVector4f,Hello) // 4 x 4 bytes
		SHADER_PARAMETER(float,World) // 1 x 4 bytes
		// 3 x 4 bytes of padding
		SHADER_PARAMETER_ARRAY(FVector3f,FooBarArray,[16]) // 4 x 4 x 16 bytes
		SHADER_PARAMETER_TEXTURE(Texture2D,BlueNoiseTexture) // 8 bytes
		SHADER_PARAMETER_SAMPLER(SamplerState,BlueNoiseSampler) // 8 bytes
		SHADER_PARAMETER_TEXTURE(Texture2D,SceneColorTexture) // 8 bytes
		SHADER_PARAMETER_SAMPLER(SamplerState,SceneColorSampler) // 8 bytes
		SHADER_PARAMETER_UAV(RWTexture2D,SceneColorOutput) // 8 bytes
	END_SHADER_PARAMETER_STRUCT()
};
```

```cpp
// Setting the parameters in C++ code before passing them off to the lambda function
FMyShaderParameters* PassParameters = GraphBuilder.AllocParameters<FMyShaderParameters>();

PassParameters->ViewPortSize = View.ViewRect.Size();
PassParameters->World = 1.0f;
PassParameters->FooBarArray[4] = FVector3f(1.0f, 0.5f, 0.5f);

// Get access to the global shader class we declared
TShaderMapRef<FMyShaderCS> ComputeShader(View.ShaderMap);
RHICmdList.SetComputeShader(ComputeShader.GetComputeShader());

// Bind the shader parameters and dispatch
SetShaderParameters(RHICmdList, ComputeShader, ComputeShader.GetComputeShader(), *PassParameters);
RHICmdList.DispatchComputeShader(GroupCount.X, GroupCount.Y, GroupCount.Z);
```

**Global Uniform Buffer**

The process is the same, with some small differences.

```cpp
BEGIN_GLOBAL_SHADER_PARAMETER_STRUCT(FSceneTextureUniformParameters,/*Blah_API*/)
	// Scene Color / Depth
	SHADER_PARAMETER_TEXTURE(Texture2D, SceneColorTexture)
	SHADER_PARAMETER_SAMPLER(SamplerState, SceneColorTextureSampler)
	SHADER_PARAMETER_TEXTURE(Texture2D, SceneDepthTexture)
	SHADER_PARAMETER_SAMPLER(SamplerState, SceneDepthTextureSampler)
	SHADER_PARAMETER_TEXTURE(Texture2D, SceneDepthTextureNonMS)
	// GBuffer
	SHADER_PARAMETER_TEXTURE(Texture2D, GBufferATexture)
	SHADER_PARAMETER_TEXTURE(Texture2D, GBufferBTexture)
	SHADER_PARAMETER_TEXTURE(Texture2D, GBufferCTexture)
	SHADER_PARAMETER_TEXTURE(Texture2D, GBufferDTexture)
	SHADER_PARAMETER_TEXTURE(Texture2D, GBufferETexture)
	SHADER_PARAMETER_TEXTURE(Texture2D, GBufferVelocityTexture)
	// ...
END_GLOBAL_SHADER_PARAMETER_STRUCT()
```

After the struct setup you need to call the implement macro; the string that follows the struct name is the real name used in the HLSL files.

```cpp
IMPLEMENT_GLOBAL_SHADER_PARAMETER_STRUCT(FSceneTextureUniformParameters,"SceneTextureStruct");
```

Inside the Unreal shader system, Common.ush will refer to the generated code; you will see Common.ush included in a lot of HLSL files. There are other includes you can use that provide useful functions for render code.
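Before the generated HLSL side can be used, the struct also has to be filled in and turned into an actual uniform buffer on the C++ side. One way to do this outside of the RDG is the `TUniformBufferRef` helper; the sketch below is only an illustration, and the RHI texture variables in it are placeholders, not engine code.

```cpp
// Fill in the global struct on the CPU. The RHI texture references here are placeholders.
FSceneTextureUniformParameters SceneTextures;
SceneTextures.SceneColorTexture = MySceneColorTextureRHI;
SceneTextures.SceneDepthTexture = MySceneDepthTextureRHI;

// Create an RHI uniform buffer from the struct. UniformBuffer_SingleFrame indicates the
// buffer only needs to stay valid for the current frame.
TUniformBufferRef<FSceneTextureUniformParameters> SceneTextureUniformBuffer =
	TUniformBufferRef<FSceneTextureUniformParameters>::CreateUniformBufferImmediate(
		SceneTextures, UniformBuffer_SingleFrame);
```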
426 | 427 | Now the uniform buffer we set can be accessed anywhere 428 | ```cpp 429 | // Generated file that contains the unifrom buffer declarations that we need to compile the shader want 430 | #include "/Engine/Generated/GeneratedUniformBuffers.ush" 431 | ``` 432 | 433 | ![[Unreal Engine Render Dependency Graph/Diagrams/SceneTextureStruct.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/563954a23906d392d55727b1132446abdd73d0dd/Diagrams/SceneTextureStruct.png) 434 | 435 | Now reference our uniform buffer inside our Parameter struct 436 | 437 | ```cpp 438 | BEGIN_SHADER_PARAMETER_STRUCT(FParameters,) 439 | //... 440 | // Here we ref our unifor buffer 441 | SHADER_PARAMETER_STRUCT_REF(FViewUniformShaderParameters,ViewUniformBuffer) 442 | END_SHADER_PARAMETER_STRUCT() 443 | ``` 444 | 445 | Again setup the parameter in C++ before passing into the Lambda Function 446 | 447 | ```cpp 448 | FMyShaderParameters* PassParameters = GraphBuilder.AllocParameters(); 449 | PassParameters.ViewPortSize = View.ViewRect.Size(); 450 | PassParameters.World = 1.0f; 451 | PassParameters.FooBarArray[4] = FVector(1.0f,0.5f,0.5f); 452 | PassParameters.ViewUniformBuffer = View.ViewUniformBuffer; 453 | ``` 454 | 455 | -------------------------------------------------------------------------------- /Render Hardware Interface (RHI).md: -------------------------------------------------------------------------------- 1 | # Render Hardware Interface (RHI) 2 | 3 | The original RHI was designed based on the D3D11 API, including some resource management and command interfaces. Since Unreal Engine is a ubiquitous 4 | tool that supports many platforms like mobile, console and PC in which then can use (DirectX, Vulkan, OpenGL, Metal). To address this Unreal abstracted an 5 | interface between all these API’s so they could keep the Rendering code as comprehensive as possible. 6 | 7 | This achieved using different render threads as shown below: 8 | 9 | ![[Unreal Engine Render Dependency Graph/Diagrams/RHIUnrealDiagram.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/e97260a557e345d37c2bb0b6352c82d35a4138df/Diagrams/RHIUnrealDiagram.png) 10 | 11 | We have a Game thread, Render thread and RHI Thread. The important thing to understand is that anything that’s rendered has a twin object between game 12 | thread and the render thread aside from some other specific cases. 13 | 14 | **Game Thread** 15 | - Primitive Components 16 | - Light Components 17 | 18 | **Render Thread** 19 | - Primitive Proxy 20 | - Light Proxy 21 | 22 | **RHI Thread** 23 | - Translates RHI “immediate” instructions from the Rendering thread to GPU based on the API specified. Note RHI Immediate really means immediate it’s not the same as a regular RHI command which is usually deferred. 24 | - DX12, Vulkan and Host support parallelism and if a RHI immediate instruction is a generated parallel command then the RHI thread will translate it in parallel. 25 | 26 | ![[Unreal Engine Render Dependency Graph/Diagrams/Parallel CommandList RHI.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/e97260a557e345d37c2bb0b6352c82d35a4138df/Diagrams/Parallel%20CommandList%20RHI.png) 27 | 28 | ## Basics of RHI 29 | 30 | **FRenderResource** 31 | 32 | The `FRenderResource` is a rendering resource representation on the rendering thread. 
This resource is managed and passed by the rendering thread as the 33 | intermediate data between the game thread and 34 | 35 | ```cpp 36 | /** 37 | * A rendering resource which is owned by the rendering thread. 38 | * NOTE - Adding new virtual methods to this class may require stubs added to FViewport/FDummyViewport, otherwise certain 39 | modules may have link errors 40 | */ 41 | class RENDERCORE_API FRenderResource 42 | { 43 | public: 44 | //////////////////////////////////////////////////////////////////////////////////// 45 | // The following methods may not be called while asynchronously initializing / releasing render resources. 46 | /** Release all render resources that are currently initialized. */ 47 | static void ReleaseRHIForAllResources(); 48 | /** Initialize all resources initialized before the RHI was initialized. */ 49 | static void InitPreRHIResources(); 50 | /** 51 | * Initializes the dynamic RHI resource and/or RHI render target used by this resource. 52 | * Called when the resource is initialized, or when reseting all RHI resources. 53 | * Resources that need to initialize after a D3D device reset must implement this function. 54 | * This is only called by the rendering thread. 55 | */ 56 | virtual void InitDynamicRHI() {} 57 | /** 58 | * Releases the dynamic RHI resource and/or RHI render target resources used by this resource. 59 | * Called when the resource is released, or when reseting all RHI resources. 60 | * Resources that need to release before a D3D device reset must implement this function. 61 | * This is only called by the rendering thread. 62 | */ 63 | virtual void ReleaseDynamicRHI() {} 64 | /** 65 | * Initializes the RHI resources used by this resource. 66 | * Called when entering the state where both the resource and the RHI have been initialized. 67 | * This is only called by the rendering thread. 68 | */ 69 | virtual void InitRHI() {} 70 | /** 71 | * Releases the RHI resources used by this resource. 72 | * Called when leaving the state where both the resource and the RHI have been initialized. 73 | * This is only called by the rendering thread. 74 | */ 75 | virtual void ReleaseRHI() {} 76 | /** 77 | * Initializes the resource. 78 | * This is only called by the rendering thread. 79 | */ 80 | virtual void InitResource(); 81 | /** 82 | * Prepares the resource for deletion. 83 | * This is only called by the rendering thread. 84 | */ 85 | virtual void ReleaseResource(); 86 | /** 87 | * If the resource's RHI resources have been initialized, then release and reinitialize it. Otherwise, do nothing. 88 | * This is only called by the rendering thread. 89 | */ 90 | void UpdateRHI(); 91 | (...) 92 | }; 93 | ``` 94 | 95 | There are many subclasses that inherit from this class so that the rendering thread can transfer data and operations of the game thread to the RHI thread at 96 | different levels of abstraction. 97 | 98 | **FRHIResource** 99 | 100 | `FRHIResource` is used for reference counting, delayed deletion, tracking, runtime data and marking. `FRHIResource` can be divided into state blocks, shader 101 | bindings, shaders, pipeline states, buffers, textures, views, and other miscellaneous items. It should be noted that we can create platform specific types with this class. Check the source for `FRHIUniformBuffer`. 102 | 103 | ```cpp 104 | /** The base type of RHI resources. 
*/ 105 | class RHI_API FRHIResource 106 | { 107 | public: 108 | UE_DEPRECATED(5.0, "FRHIResource(bool) is deprecated, please use FRHIResource(ERHIResourceType)") 109 | FRHIResource(bool InbDoNotDeferDelete=false) 110 | : ResourceType(RRT_None) 111 | , bCommitted(true) 112 | #if RHI_ENABLE_RESOURCE_INFO 113 | , bBeingTracked(false) 114 | #endif 115 | { 116 | } 117 | FRHIResource(ERHIResourceType InResourceType) 118 | : ResourceType(InResourceType) 119 | , bCommitted(true) 120 | #if RHI_ENABLE_RESOURCE_INFO 121 | , bBeingTracked(false) 122 | #endif 123 | { 124 | #if RHI_ENABLE_RESOURCE_INFO 125 | BeginTrackingResource(this); 126 | #endif 127 | } 128 | virtual ~FRHIResource() 129 | { 130 | check(IsEngineExitRequested() || CurrentlyDeleting == this); 131 | check(AtomicFlags.GetNumRefs(std::memory_order_relaxed) == 0); // this should not have any outstanding refs 132 | CurrentlyDeleting = nullptr; 133 | #if RHI_ENABLE_RESOURCE_INFO 134 | EndTrackingResource(this); 135 | #endif 136 | } 137 | FORCEINLINE_DEBUGGABLE uint32 AddRef() const 138 | {...}; 139 | private: 140 | // Separate function to avoid force inlining this everywhere. Helps both for code size and performance. 141 | inline void Destroy() const 142 | {...}; 143 | public: 144 | FORCEINLINE_DEBUGGABLE uint32 Release() const 145 | {...}; 146 | FORCEINLINE_DEBUGGABLE uint32 GetRefCount() const 147 | {...}; 148 | static int32 FlushPendingDeletes(FRHICommandListImmediate& RHICmdList); 149 | static bool Bypass(); 150 | bool IsValid() const 151 | {...}; 152 | void Delete() 153 | {...}; 154 | inline ERHIResourceType GetType() const { return ResourceType; } 155 | #if RHI_ENABLE_RESOURCE_INFO 156 | // Get resource info if available. 157 | // Should return true if the ResourceInfo was filled with data. 158 | virtual bool GetResourceInfo(FRHIResourceInfo& OutResourceInfo) const 159 | {...}; 160 | static void BeginTrackingResource(FRHIResource* InResource); 161 | static void EndTrackingResource(FRHIResource* InResource); 162 | static void StartTrackingAllResources(); 163 | static void StopTrackingAllResources(); 164 | #endif 165 | private: 166 | class FAtomicFlags 167 | { 168 | static constexpr uint32 MarkedForDeleteBit = 1 << 30; 169 | static constexpr uint32 DeletingBit = 1 << 31; 170 | static constexpr uint32 NumRefsMask = ~(MarkedForDeleteBit | DeletingBit); 171 | std::atomic_uint Packed = { 0 }; 172 | public: 173 | int32 AddRef(std::memory_order MemoryOrder) 174 | {...}; 175 | int32 Release(std::memory_order MemoryOrder) 176 | {...}; 177 | bool MarkForDelete(std::memory_order MemoryOrder) 178 | {...}; 179 | bool UnmarkForDelete(std::memory_order MemoryOrder) 180 | {...}; 181 | bool Deleteing() 182 | {...}; 183 | mutable FAtomicFlags AtomicFlags; 184 | const ERHIResourceType ResourceType; 185 | uint8 bCommitted : 1; 186 | #if RHI_ENABLE_RESOURCE_INFO 187 | uint8 bBeingTracked : 1; 188 | 189 | #endif 190 | static std::atomic*> PendingDeletes; 191 | static FHazardPointerCollection PendingDeletesHPC; 192 | static FRHIResource* CurrentlyDeleting; 193 | // Some APIs don't do internal reference counting, so we have to wait an extra couple of frames before deleting resources 194 | // to ensure the GPU has completely finished with them. This avoids expensive fences, etc. 195 | struct ResourcesToDelete 196 | {...}; 197 | }; 198 | ``` 199 | 200 | **FRHICommandList** 201 | 202 | The RHI Command list is an instruction queue that is used to manage and execute a group of command objects. The parent class for this is 203 | FRHICommandListBase. 
`FRHICommandListBase` defines the basic data (command list, device context) and interface (command flushing, waiting, enqueueing, 204 | memory allocation, etc.) required by the command queue. `FRHIComputeCommandList` defines the interface for compute shaders: dispatching them, transitioning the state of GPU resources, and setting their shader parameters. `FRHICommandList` defines the interface of the common rendering pipeline, which includes binding `Vertex Shaders`, `Pixel Shaders`, `Geometry Shaders`, drawing primitives, setting shader parameters, resource management, etc. 205 | 206 | **RHIContext & DynamicRHI** 207 | 208 | Lastly, `RHIContext` and `DynamicRHI` are further interface classes that define sets of graphics-API-related operations. As mentioned earlier, some APIs can 209 | process commands in parallel, and these separate objects are what allow that to be expressed. 210 | 211 | In summary, the RHI classes are the lowest-level abstraction Unreal uses to talk to the graphics APIs, and the ones listed are the main ones you should be aware 212 | of. An in-depth article that goes through the RHI in more detail is linked in the references section. Also, if command lists still don't make sense, just 213 | look up how a basic command list/buffer is recorded and submitted to the GPU; any of the graphics APIs will show you how it is queued. 214 | 215 | -------------------------------------------------------------------------------- /Triangle Shader.md: -------------------------------------------------------------------------------- 1 | # Scene View Extension Triangle Shader 2 | 3 | ![[Unreal Engine Render Dependency Graph/Diagrams/Triangle Render.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/5110a92b8c25e1eab6f28e73456570649c2d0470/Diagrams/Triangle%20Render.png) 4 | 5 | With this knowledge the next step is to put it into practice. The “Hello World” of graphics programming is to draw a basic triangle using the Vertex Shader and 6 | Pixel Shader. Drawing a triangle in Unreal Engine encapsulates the process for rendering anything in the Engine itself. 7 | 8 | This section will cover, step by step, how to create a render pass that draws this triangle using the Render Dependency Graph in Unreal Engine. 9 | 10 | ## Plugin/Module & Shader Folder Setup 11 | 12 | Whether you use a Plugin or a Module depends on your use case, but the important thing to know is that the “StartupModule” or “Initialize” functions of a Plugin or 13 | Module allow you to run code before Unreal Engine is fully initialized. The reason this matters is that Unreal’s Renderer is a Module. If you don’t understand 14 | Modules and the Engine Life Cycle, I recommend doing a little reading on them to give yourself a better understanding of the runtime linking of modules. The 15 | Renderer is linked at runtime, so shader code is compiled right before the editor starts up. 16 | 17 | **Plugin Setup** 18 | 19 | Go ahead and create a basic C++ project in Unreal Engine; the First Person Shooter template will suffice. 20 | Once you’ve created the project, navigate to your folder structure and add a Shaders folder. 21 | Here is a rough example of what it looks like: 22 | 23 | ``` 24 | ├── YourProjectName 25 | ………├──> YourProjectName.Build.cs 26 | ………├──> Source Folder 27 | ………└──> ...
28 | ├──> Plugins 29 | …………├──> Your Plugin Folder 30 | ………….……..└──> Shaders (Create the Folder here) 31 | ………….…….…….└──> .usf Files go here 32 | …………………├──> Source (Create the Folder here) 33 | ……………………………└──>Private 34 | ……………………………└──>Public 35 | ……………………………└──>YourPluginName.Build.cs 36 | ……….└──> YourPluginName.uplugin 37 | ``` 38 | 39 | 40 | Go to the PluginName.Build.cs file and ensure the dependencies are there. 41 | 42 | ```cpp 43 | using UnrealBuildTool; 44 | 45 | public class YourPluginName : ModuleRules 46 | { 47 | public YourPluginName(ReadOnlyTargetRules Target) : base(Target) 48 | { 49 | PCHUsage = ModuleRules.PCHUsageMode.UseExplicitOrSharedPCHs; 50 | 51 | PublicIncludePaths.AddRange( 52 | new string[] { 53 | // ... add public include paths required here ... 54 | EngineDirectory + "/Source/Runtime/Renderer/Private" 55 | } 56 | ); 57 | 58 | PrivateIncludePaths.AddRange( 59 | new string[] { 60 | // ... add other private include paths required here ... 61 | } 62 | ); 63 | 64 | PublicDependencyModuleNames.AddRange( 65 | new string[] 66 | { 67 | "Core", 68 | "RHI", 69 | "Renderer", 70 | "RenderCore", 71 | "Projects" 72 | // ... add other public dependencies that you statically link with here ... 73 | } 74 | ); 75 | 76 | PrivateDependencyModuleNames.AddRange( 77 | new string[] 78 | { 79 | "CoreUObject", 80 | "Engine", 81 | "Slate", 82 | "SlateCore" 83 | // ... add private dependencies that you statically link with here ... 84 | } 85 | ); 86 | 87 | DynamicallyLoadedModuleNames.AddRange( 88 | new string[] 89 | { 90 | // ... add any modules that your module loads dynamically here ... 91 | } 92 | ); 93 | } 94 | } 95 | 96 | ``` 97 | 98 | Next set the modules loading phase inside YourPluginName.uplugin to “PostConfigInit” 99 | 100 | ```cpp 101 | { 102 | "FileVersion": 3, 103 | "Version": 1, 104 | "VersionName": "1.0", 105 | "FriendlyName": "YourPluginName", 106 | "Description": "", 107 | "Category": "Other", 108 | "CreatedBy": "", 109 | "CreatedByURL": "", 110 | "DocsURL": "", 111 | "MarketplaceURL": "", 112 | "SupportURL": "", 113 | "CanContainContent": true, 114 | "IsBetaVersion": false, 115 | "IsExperimentalVersion": false, 116 | "Installed": false, 117 | "Modules": [ 118 | { 119 | "Name": "YourPluginName", 120 | "Type": "Runtime", 121 | "LoadingPhase": "PostConfigInit" // Set it Here 122 | } 123 | ] 124 | } 125 | ``` 126 | 127 | The next step is to bind the shader folder with Unreal so it can find and compile our custom shader code. 128 | 129 | ```cpp 130 | void FYourPluginNameModule::StartupModule() 131 | { 132 | // This code will execute after your module is loaded into memory; the exact timing is specified in the .uplugin file permodule 133 | FString BaseDir = IPluginManager::Get().FindPlugin(TEXT("YourPluginName"))->GetBaseDir(); 134 | FString PluginShaderDir = FPaths::Combine(BaseDir, TEXT("Shaders")); 135 | AddShaderSourceDirectoryMapping(TEXT("/CustomShaders"), PluginShaderDir); 136 | } 137 | void FYourPluginNameModule::ShutdownModule() 138 | { 139 | // This function may be called during shutdown to clean up your module. For modules that support dynamic reloading, 140 | // we call this function before unloading the module. 141 | } 142 | IMPLEMENT_MODULE(FYourPluginNameModule, YourPluginNamePlugin) 143 | ``` 144 | 145 | You need to include "Interfaces/IPluginManager.h" so you can get access to the helper functions that grabs the base directory to your plugin shader folder 146 | directory. 
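For reference, this is the include in question (placement assumed to be at the top of your plugin module's .cpp alongside its other includes):

```cpp
#include "Interfaces/IPluginManager.h"
```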
The `AddShaderSourceDirectoryMapping(TEXT("/CustomShaders"), PluginShaderDir)` line binds a `virtual folder` called “/CustomShaders” to the plugin’s Shaders directory. The 147 | virtual name itself is arbitrary; you can call it whatever you like, as long as the shader paths you register later (for example in `IMPLEMENT_SHADER_TYPE`) use the same prefix. 148 | 149 | Next go into your Plugin folder and add a MyViewExtensionSubSystem.cpp file and a MyViewExtensionSubSystem.h inside your Private and Public folders 150 | respectively. 151 | 152 | The subsystem is ideal for creating the pointer to our custom FSceneViewExtensionBase object during the `PostConfigInit` phase. This allows us to draw our 153 | triangle to the editor viewport. Alternatively, you could declare a `TSharedPtr` member in MyCharacter.h and instantiate it in 154 | `BeginPlay`, so the triangle is only rendered after pressing Play in the editor. 155 | 156 | **Module Setup** 157 | 158 | **MyViewExtensionSubSystem.cpp** 159 | 160 | ```cpp 161 | // Fill out your copyright notice in the Description page of Project Settings. 162 | #include "MyViewExtensionSubSystem.h" 163 | #include "SceneViewExtension.h" 164 | #include "MyViewExtension.h" 165 | void UMyViewExtensionSubSystem::Initialize(FSubsystemCollectionBase& Collection) 166 | { 167 | Super::Initialize(Collection); 168 | // Create Shared Pointer Call 169 | UE_LOG(LogTemp, Warning, TEXT("View Extension SubSystem Init")); 170 | // This is the pointer to the FSceneViewExtension you will see later on 171 | // You need this line to run your shader. 172 | this->ShaderTest = FSceneViewExtensions::NewExtension<FMyViewExtension>(); 173 | } 174 | ``` 175 | 176 | **MyViewExtensionSubSystem.h** 177 | 178 | ```cpp 179 | // Fill out your copyright notice in the Description page of Project Settings. 180 | #pragma once 181 | #include "CoreMinimal.h" 182 | #include "Subsystems/EngineSubsystem.h" 183 | #include "MyViewExtensionSubSystem.generated.h" 184 | class FMyViewExtension; 185 | /** 186 | * 187 | */ 188 | UCLASS() 189 | class UMyViewExtensionSubSystem : public UEngineSubsystem 190 | { 191 | GENERATED_BODY() 192 | protected: 193 | // Declaration of the pointer to the view extension object that Unreal's FSceneViewExtensions factory gives us 194 | TSharedPtr<FMyViewExtension, ESPMode::ThreadSafe> ShaderTest; 195 | public: 196 | virtual void Initialize(FSubsystemCollectionBase& Collection) override; 197 | }; 198 | ``` 199 | 200 | ### FSceneViewExtensionBase 201 | 202 | This base class is important because it allows you to hook into the render pipeline and use its delegates to insert your own custom render pass. Before `5.1`, in 203 | order to create your own custom rendering pass a plugin or module was still needed to initialize your shader folder. Without this class, however, you would need to 204 | do additional C++ work to add your own delegates inside the render pipeline source code for a custom rendering pass. 205 | 206 | In this tutorial we will be overriding a delegate function in the FSceneViewExtensionBase class to insert a pass in the Post Processing phase of the render pipeline. 207 | 208 | In the engine source, at line 411 inside PostProcessing.cpp, the delegates of FSceneViewExtensionBase are added. 209 | 210 | ```cpp 211 | // ../Engine../PostProcessing.cpp 212 | const auto AddAfterPass = [&](EPass InPass, FScreenPassTexture InSceneColor) -> FScreenPassTexture 213 | { 214 | // In some cases (e.g. OCIO color conversion) we want View Extensions to be able to add extra custom post processing after 215 | // the pass.
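// (Annotation, not engine code:) GetAfterPassCallbacks() below returns the delegates that
// view extensions registered for this pass via SubscribeToPostProcessingPass. Each callback
// is handed the current SceneColor and may return an override output; this is exactly the
// hook our TrianglePass_RenderThread delegate will use later.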
216 | FAfterPassCallbackDelegateArray& PassCallbacks = PassSequence.GetAfterPassCallbacks(InPass); 217 | 218 | if (PassCallbacks.Num()) 219 | { 220 | FPostProcessMaterialInputs InOutPostProcessAfterPassInputs = GetPostProcessMaterialInputs(InSceneColor); 221 | 222 | for (int32 AfterPassCallbackIndex = 0; AfterPassCallbackIndex < PassCallbacks.Num(); AfterPassCallbackIndex++) 223 | { 224 | InOutPostProcessAfterPassInputs.SetInput(EPostProcessMaterialInput::SceneColor, InSceneColor); 225 | 226 | FAfterPassCallbackDelegate& AfterPassCallback = PassCallbacks[AfterPassCallbackIndex]; 227 | 228 | PassSequence.AcceptOverrideIfLastPass(InPass, InOutPostProcessAfterPassInputs.OverrideOutput, 229 | AfterPassCallbackIndex); 230 | 231 | InSceneColor = AfterPassCallback.Execute(GraphBuilder, View, InOutPostProcessAfterPassInputs); 232 | } 233 | } 234 | } 235 | ``` 236 | 237 | 238 | You see at `AddAfterPass`, `FScreenPassTexture` `InSceneColor` is passing a reference of “SceneColor” to the delegates overridden.` SceneColor` is a 239 | `FScreenPassTexture` that describes a texture paired with a viewport rect. The Scene Color texture is written too many times throughout PostProcessing pipeline 240 | and will be the texture we will draw our triangle onto. This will be our “Render Target”. 241 | 242 | **Class Setup** 243 | 244 | Let’s create another class header and CPP file inside our plugin public/private source folder. Call it MyViewExtension (or whatever you like). 245 | 246 | ```cpp 247 | #pragma once 248 | #include "TriangleShader.h" 249 | #include "SceneViewExtension.h" 250 | #include "RenderResource.h" 251 | class YOURPLUGINNAME_API FMyViewExtension : public FSceneViewExtensionBase { 252 | public: 253 | FMyViewExtension(const FAutoRegister& AutoRegister); 254 | //~ Begin FSceneViewExtensionBase Interface 255 | virtual void SetupViewFamily(FSceneViewFamily& InViewFamily) override {} 256 | virtual void SetupView(FSceneViewFamily& InViewFamily, FSceneView& InView) override {}; 257 | virtual void BeginRenderViewFamily(FSceneViewFamily& InViewFamily) override; 258 | virtual void PreRenderViewFamily_RenderThread(FRDGBuilder& GraphBuilder, FSceneViewFamily& InViewFamily) override{}; 259 | virtual void PreRenderView_RenderThread(FRDGBuilder& GraphBuilder, FSceneView& InView) override; 260 | virtual void PostRenderBasePass_RenderThread(FRHICommandListImmediate& RHICmdList, FSceneView& InView) override {}; 261 | virtual void PrePostProcessPass_RenderThread(FRDGBuilder& GraphBuilder, const FSceneView& View, const 262 | FPostProcessingInputs& Inputs) override; 263 | virtual void SubscribeToPostProcessingPass(EPostProcessingPass Pass, FAfterPassCallbackDelegateArray& InOutPassCallbacks, 264 | bool bIsPassEnabled)override; 265 | }; 266 | ``` 267 | 268 | Here we have a couple delegates denoted with `_RenderThread` and other setup functions. We will be overriding `SubscribeToPostProcessingPass` only. 269 | 270 | Let’s setup some functions in the MyViewExtension.cpp file. 
271 | 272 | ```cpp 273 | #include "MyViewExtension.h" 274 | #include "TriangleShader.h" 275 | #include "PixelShaderUtils.h" 276 | #include "PostProcess/PostProcessing.h" 277 | #include "PostProcess/PostProcessMaterial.h" 278 | #include "SceneTextureParameters.h" 279 | #include "ShaderParameterStruct.h" 280 | // This line declares the name of your render pass so you can see it in a GPU/render debugger 281 | DECLARE_GPU_DRAWCALL_STAT(TrianglePass); 282 | FMyViewExtension::FMyViewExtension(const FAutoRegister& AutoRegister) : FSceneViewExtensionBase(AutoRegister) { 283 | 284 | } 285 | 286 | 287 | 288 | 289 | void FMyViewExtension::SubscribeToPostProcessingPass(EPostProcessingPass Pass, FAfterPassCallbackDelegateArray& 290 | InOutPassCallbacks, bool bIsPassEnabled) 291 | { 292 | if (Pass == EPostProcessingPass::Tonemap) 293 | { 294 | // Create Raw Delegate Here, see later on 295 | } 296 | } 297 | ``` 298 | 299 | Inside the `SubscribeToPostProcessingPass` function I want to bring attention to the if statement. It uses the `EPostProcessingPass` enum to define where in the 300 | Post Processing phase you want to insert a render pass. I’ve opted to do it after `Tonemap`, however you can choose other points in the pipeline defined by that 301 | enum. For now the goal is to draw the triangle to the Scene Color since it’s available during the entire Post Process pass. 302 | 303 | ### Setting up Global Shaders 304 | 305 | The next step is to create two global shaders. The first is our custom vertex shader that handles the triangle vertex data stored in the vertex buffer. The second 306 | is a pixel shader which will color our triangle after the rasterizer has processed our vertices. 307 | 308 | Again, inside the Plugin source folder we create a separate cpp/header file pair: 309 | TriangleShader.cpp and TriangleShader.h 310 | 311 | **Vertex Shader Class** 312 | 313 | ```cpp 314 | // TriangleShader.h 315 | // Defined here so we can access it in the View Extension. 316 | BEGIN_SHADER_PARAMETER_STRUCT(FTriangleVSParams,) 317 | //RENDER_TARGET_BINDING_SLOTS() 318 | END_SHADER_PARAMETER_STRUCT() 319 | class FTriangleVS : public FGlobalShader 320 | { 321 | public: 322 | DECLARE_GLOBAL_SHADER(FTriangleVS); 323 | SHADER_USE_PARAMETER_STRUCT(FTriangleVS, FGlobalShader) 324 | using FParameters = FTriangleVSParams; 325 | 326 | static bool ShouldCompilePermutation(const FGlobalShaderPermutationParameters& Parameters) { 327 | return true; 328 | } 329 | }; 330 | ``` 331 | 332 | Keeping it simple, we aren’t creating any special buffers or resources for the vertex shader; at minimum the RDG requires the parameter struct macro 333 | declaration. 334 | 335 | **Pixel Shader Class** 336 | 337 | ```cpp 338 | // TriangleShader.h 339 | BEGIN_SHADER_PARAMETER_STRUCT(FTrianglePSParams,) 340 | RENDER_TARGET_BINDING_SLOTS() 341 | END_SHADER_PARAMETER_STRUCT() 342 | class FTrianglePS: public FGlobalShader 343 | { 344 | DECLARE_GLOBAL_SHADER(FTrianglePS); 345 | using FParameters = FTrianglePSParams; 346 | SHADER_USE_PARAMETER_STRUCT(FTrianglePS, FGlobalShader) 347 | }; 348 | ``` 349 | 350 | 351 | `RENDER_TARGET_BINDING_SLOTS()` is the only resource we are passing; it binds the render target (here, the viewport’s Scene Color) to our custom HLSL shader 352 | code. 353 | 354 | In both classes we need to declare them as global shaders and define the local parameter struct the shader needs.
In C++ “using” gives access to the type 355 | FParamters as `FTrianglePSParams` when declaring a pointer of that type for use elsewhere. 356 | 357 | Let’s add the Triangle HLSL code in our Shader Folder. 358 | 359 | Go into the Shader folder and create a file called `Triangle.usf` then paste the code below and save it. 360 | 361 | ```c 362 | #include "/Engine/Public/Platform.ush" 363 | #include "/Engine/Private/Common.ush" 364 | #include "/Engine/Private/ScreenPass.ush" 365 | #include "/Engine/Private/PostProcessCommon.ush" 366 | void TriangleVS( 367 | in float2 InPosition : ATTRIBUTE0, 368 | in float4 InColor : ATTRIBUTE1, 369 | out float4 OutPosition : SV_POSITION, 370 | out float4 OutColor : COLOR0 371 | ) 372 | { 373 | OutPosition = float4(InPosition, 0, 1); 374 | OutColor = InColor; 375 | } 376 | void TrianglePS( 377 | in float4 InPosition : SV_POSITION, 378 | in float4 InColor : COLOR0, 379 | out float4 OutColor : SV_Target0) 380 | { 381 | OutColor = InColor; 382 | } 383 | ``` 384 | 385 | ### Creating Vertex and Index Buffers 386 | 387 | At this point we’ve introduced mostly all the classes involved in the Rendering Pass. We still need to define the resources classes for our `Vertex Buffer` and `Index 388 | `Buffer`.` 389 | 390 | Inside `TrangleShader.h` 391 | 392 | Add a struct the defines our Colored Vertex 393 | 394 | ```cpp 395 | /** The vertex data used to filter a texture. */ 396 | // TrangleShader.h 397 | struct FColorVertex 398 | { 399 | public: 400 | FVector2f Position; 401 | FVector4f Color; 402 | }; 403 | ``` 404 | 405 | Add the `FVertexBuffer` class 406 | 407 | ```cpp 408 | // TrangleShader.h 409 | /** 410 | * Static vertex and index buffer used for 2D screen rectangles. 411 | */ 412 | class FTriangleVertexBuffer : public FVertexBuffer 413 | { 414 | public: 415 | /** Initialize the RHI for this rendering resource */ 416 | void InitRHI() override { 417 | TResourceArray Vertices; 418 | Vertices.SetNumUninitialized(3); 419 | Vertices[0].Position = FVector2f(0.0f,0.75f); 420 | Vertices[0].Color = FVector4f(1, 0, 0, 1); 421 | Vertices[1].Position = FVector2f(0.75,-0.75); 422 | Vertices[1].Color = FVector4f(0, 1, 0, 1); 423 | Vertices[2].Position = FVector2f(-0.75,-0.75); 424 | Vertices[2].Color = FVector4f(0, 0, 1, 1); 425 | FRHIResourceCreateInfo CreateInfo(TEXT("FScreenRectangleVertexBuffer"), &Vertices); 426 | VertexBufferRHI = RHICreateVertexBuffer(Vertices.GetResourceDataSize(), BUF_Static, CreateInfo); 427 | } 428 | }; 429 | ``` 430 | 431 | Inside `FTriangleVertexBuffer` override `InitRHI` to initialize the vertex data. In GPU programming you need to create a context so you can bind resources from the 432 | CPU before doing a memory copy to the GPU. In order to achieve this a context must be specified holding the name of the resource, size and other properties. 433 | Set `VertexBufferRHI` object with `RHICreateVertexBuffer` and pass the required information. 
434 | 435 | Adding the Index Buffer 436 | 437 | ```cpp 438 | // TrangleShader.h 439 | class FTriangleIndexBuffer : public FIndexBuffer 440 | { 441 | public: 442 | /** Initialize the RHI for this rendering resource */ 443 | void InitRHI() override 444 | { 445 | const uint16 Indices[] = { 0, 1, 2 }; 446 | TResourceArray IndexBuffer; 447 | uint32 NumIndices = UE_ARRAY_COUNT(Indices); 448 | IndexBuffer.AddUninitialized(NumIndices); 449 | FMemory::Memcpy(IndexBuffer.GetData(), Indices, NumIndices * sizeof(uint16)); 450 | FRHIResourceCreateInfo CreateInfo(TEXT("FTriangleIndexBuffer"), &IndexBuffer); 451 | IndexBufferRHI = RHICreateIndexBuffer(sizeof(uint16), IndexBuffer.GetResourceDataSize(), BUF_Static, CreateInfo); 452 | } 453 | }; 454 | ``` 455 | 456 | Same process here, you don’t need to use a `index buffer` for something a simple as a triangle. Index buffers hold pointers to our vertex data in the vertex buffer. 457 | 458 | Usually the index buffer will read a combination of three vertices, but you can make any combination you want. This is optimal because we can reuse vertex data 459 | to draw a triangle, quad or other primitives depending on your use case. 460 | 461 | Lastly add a global declaration resource for the `Vertex Buffer`, this is used to define the input layout for the` Input assembler` so we can bind the correct 462 | attributes in the HLSL code. 463 | 464 | ```cpp 465 | // TrangleShader.h 466 | class FTriangleVertexDeclaration : public FRenderResource 467 | { 468 | public: 469 | FVertexDeclarationRHIRef VertexDeclarationRHI; 470 | /** Destructor. */ 471 | virtual ~FTriangleVertexDeclaration() {} 472 | virtual void InitRHI() 473 | { 474 | FVertexDeclarationElementList Elements; 475 | uint16 Stride = sizeof(FColorVertex); 476 | Elements.Add(FVertexElement(0, STRUCT_OFFSET(FColorVertex, Position), VET_Float2, 0, Stride)); 477 | Elements.Add(FVertexElement(0, STRUCT_OFFSET(FColorVertex, Color), VET_Float4, 1, Stride)); 478 | VertexDeclarationRHI = PipelineStateCache::GetOrCreateVertexDeclaration(Elements); 479 | } 480 | virtual void ReleaseRHI() 481 | { 482 | VertexDeclarationRHI.SafeRelease(); 483 | } 484 | }; 485 | ``` 486 | 487 | 488 | Again we override `InitRHI` to set our declaration information. For the input layout we define an elements object and the `stride`. In computer graphics the `stride` 489 | refers to the number of bytes between the start of one element and the start of the next element in an array of elements stored in memory. 490 | 491 | For the input assembler to read the `vertex buffer` properly it needs to know the number of bytes between the start of one vertex and the start of the next vertex. 492 | 493 | We can use `sizeof` function to compute the space of one vertex as an offset for next vertices. You also notice `STRUCT_OFFSET` is a macro helper to determine the offset between the values defined in our struct. 494 | 495 | Next declare the resource global and extern them so the Renderer API can see it. 496 | 497 | ```cpp 498 | // TriangleShader.h 499 | extern YOURPLUGIN_API TGlobalResource GTriangleVertexBuffer; 500 | extern YOURPLUGIN_API TGlobalResource GTriangleIndexBuffer; 501 | extern YOURPLUGIN_API TGlobalResource GTriangleVertexDeclaration; 502 | ``` 503 | 504 | back in `Triangle.cpp` declare the starting points for our shaders to match the function names in HLSL. 
505 | 506 | ```cpp 507 | #include "TriangleShader.h" 508 | #include "Shader.h" 509 | #include "VertexFactory.h" 510 | 511 | // Define our Vertex Shader and Pixel Shader Starting point 512 | // This is needed for all shaders 513 | 514 | IMPLEMENT_SHADER_TYPE(,FTriangleVS, TEXT("/CustomShaders/Triangle.usf"),TEXT("TriangleVS"),SF_Vertex); 515 | IMPLEMENT_SHADER_TYPE(,FTrianglePS,TEXT("/CustomShaders/Triangle.usf"),TEXT("TrianglePS"),SF_Pixel); 516 | 517 | TGlobalResource GTriangleVertexBuffer; 518 | TGlobalResource GTriangleIndexBuffer; 519 | TGlobalResource GTriangleVertexDeclaration; 520 | ``` 521 | 522 | Lastly define our `TGlobalResources` objects. 523 | 524 | ### Adding an RDG Pass 525 | 526 | Returning to update your MyViewExtension cpp/header files. 527 | 528 | ```cpp 529 | // MyViewExtension.h 530 | class YOURPLUGIN_API FMyViewExtension : public FViewExtension 531 | { 532 | public: 533 | FLensFlareSceneView(const FAutoRegister& AutoRegister); 534 | 535 | virtual void SubscribeToPostProcessingPass(EPostProcessingPass Pass, FAfterPassCallbackDelegateArray& InOutPassCallbacks, bool bIsPassEnabled) override; 536 | 537 | protected: 538 | // Copied from PixelShaderUtils 539 | template 540 | static void AddFullscreenPass( 541 | FRDGBuilder& GraphBuilder, 542 | const FGlobalShaderMap* GlobalShaderMap, 543 | FRDGEventName&& PassName, 544 | const TShaderRef& PixelShader, 545 | typename TShaderClass::FParameters* Parameters, 546 | const FIntRect& Viewport, 547 | FRHIBlendState* BlendState = nullptr, 548 | FRHIRasterizerState* RasterizerState = nullptr, 549 | FRHIDepthStencilState* DepthStencilState = nullptr, 550 | uint32 StencilRef = 0 551 | ); 552 | 553 | template 554 | static void DrawFullscreenPixelShader( 555 | FRHICommandList& RHICmdList, 556 | const FGlobalShaderMap* GlobalShaderMap, 557 | const TShaderRef& PixelShader, 558 | const typename TShaderClass::FParameters& Parameters, 559 | const FIntRect& Viewport, 560 | FRHIBlendState* BlendState = nullptr, 561 | FRHIRasterizerState* RasterizerState = nullptr, 562 | FRHIDepthStencilState* DepthStencilState = nullptr, 563 | uint32 StencilRef = 0 564 | ); 565 | 566 | static inline void DrawFullScreenTriangle(FRHICommandList& RHICmdList, uint32 InstanceCount); 567 | 568 | // A delegate that is called when the Tone mapper pass finishes 569 | FScreenPassTexture TrianglePass_RenderThread(FRDGBuilder& GraphBuilder, const FSceneView& View, const FPostProcessMaterialInputs& Inputs); 570 | 571 | // For now we try to build a vertex buffer and see if it puts some shit in it 572 | public: 573 | static void RenderTriangle( 574 | FRDGBuilder& GraphBuilder, 575 | const FGlobalShaderMap* ViewShaderMap, 576 | const FIntRect& View, 577 | const FScreenPassTexture& SceneColor 578 | ); 579 | }; 580 | ``` 581 | 582 | Remember the order starts with adding a pass by giving the RDG lambda function the resources and parameters it needs. Next is defining the draw call(s) where 583 | you setup the GPU Pipeline. 584 | 585 | You **MUST** setup the GPU pipeline because it defines all the parameters for the draw call (Blend States, Rasterizer State, Primitive type, Shaders, Viewport, Render Targets, Commands). 586 | 587 | The GPU Pipeline stores the state to be interpreted for the low lever graphics API calls being used. 588 | 589 | First function is the templated `AddFullscreenPass` with the required parameters needed for the lambda function, nulling parameters we won’t use for the draw 590 | call. 
591 | 592 | ```cpp 593 | // MyViewExtension.cpp 594 | template 595 | void FMyViewExtension::AddFullscreenPass( 596 | FRDGBuilder& GraphBuilder, 597 | const FGlobalShaderMap* GlobalShaderMap, 598 | FRDGEventName&& PassName, 599 | const TShaderRef& PixelShader, 600 | typename TShaderClass::FParameters* Parameters, 601 | const FIntRect& Viewport, 602 | FRHIBlendState* BlendState, 603 | FRHIRasterizerState* RasterizerState, 604 | FRHIDepthStencilState* DepthStencilState, 605 | uint32 StencilRef) 606 | { 607 | check(PixelShader.IsValid()); 608 | ClearUnusedGraphResources(PixelShader, Parameters); 609 | 610 | GraphBuilder.AddPass( 611 | Forward(PassName), 612 | Parameters, 613 | ERDGPassFlags::Raster, 614 | [Parameters, GlobalShaderMap, PixelShader, Viewport, BlendState, RasterizerState, DepthStencilState, StencilRef] 615 | (FRHICommandList& RHICmdList) 616 | { 617 | FLensFlareSceneView::DrawFullscreenPixelShader( 618 | RHICmdList, GlobalShaderMap, PixelShader, *Parameters, Viewport, 619 | BlendState, RasterizerState, DepthStencilState, StencilRef); 620 | }); 621 | } 622 | ``` 623 | 624 | 625 | ### Setup the Draw Call 626 | 627 | ```cpp 628 | template 629 | void FMyViewExtension::DrawFullscreenPixelShader( 630 | FRHICommandList& RHICmdList, 631 | const FGlobalShaderMap* GlobalShaderMap, 632 | const TShaderRef& PixelShader, 633 | const typename TShaderClass::FParameters& Parameters, 634 | const FIntRect& Viewport, 635 | FRHIBlendState* BlendState, 636 | FRHIRasterizerState* RasterizerState, 637 | FRHIDepthStencilState* DepthStencilState, 638 | uint32 StencilRef) 639 | { 640 | check(PixelShader.IsValid()); 641 | 642 | RHICmdList.SetViewport( 643 | (float)Viewport.Min.X, (float)Viewport.Min.Y, 0.0f, 644 | (float)Viewport.Max.X, (float)Viewport.Max.Y, 1.0f); 645 | 646 | // Begin Setup Gpu Pipeline for this Pass 647 | FGraphicsPipelineStateInitializer GraphicsPSOInit; 648 | TShaderMapRef VertexShader(GlobalShaderMap); 649 | 650 | RHICmdList.ApplyCachedRenderTargets(GraphicsPSOInit); 651 | 652 | GraphicsPSOInit.BlendState = TStaticBlendState<>::GetRHI(); 653 | GraphicsPSOInit.RasterizerState = TStaticRasterizerState<>::GetRHI(); 654 | GraphicsPSOInit.DepthStencilState = TStaticDepthStencilState::GetRHI(); 655 | GraphicsPSOInit.BoundShaderState.VertexDeclarationRHI = GTriangleVertexDeclaration.VertexDeclarationRHI; 656 | GraphicsPSOInit.BoundShaderState.VertexShaderRHI = VertexShader.GetVertexShader(); 657 | GraphicsPSOInit.BoundShaderState.PixelShaderRHI = PixelShader.GetPixelShader(); 658 | GraphicsPSOInit.PrimitiveType = PT_TriangleList; 659 | 660 | GraphicsPSOInit.BlendState = BlendState ? BlendState : GraphicsPSOInit.BlendState; 661 | GraphicsPSOInit.RasterizerState = RasterizerState ? RasterizerState : GraphicsPSOInit.RasterizerState; 662 | GraphicsPSOInit.DepthStencilState = DepthStencilState ? 
DepthStencilState : GraphicsPSOInit.DepthStencilState; 663 | 664 | // End Gpu Pipeline setup 665 | SetGraphicsPipelineState(RHICmdList, GraphicsPSOInit, StencilRef); 666 | SetShaderParameters(RHICmdList, PixelShader, PixelShader.GetPixelShader(), Parameters); 667 | DrawFullScreenTriangle(RHICmdList, 1); 668 | } 669 | 670 | ``` 671 | 672 | I created a `RenderTriangle` function to wrap the Add Pass and Draw Call functions. 673 | 674 | ```cpp 675 | // FMyViewExtension.cpp 676 | void FMyViewExtension::RenderTriangle( 677 | FRDGBuilder& GraphBuilder, 678 | const FGlobalShaderMap* ViewShaderMap, 679 | const FIntRect& ViewInfo, 680 | const FScreenPassTexture& SceneColor) 681 | { 682 | // Begin Setup 683 | // Shader Parameter Setup 684 | FTrianglePSParams* PassParams = GraphBuilder.AllocParameters<FTrianglePSParams>(); 685 | 686 | // Set the Render Target, in this case it is the Scene Color 687 | PassParams->RenderTargets[0] = FRenderTargetBinding(SceneColor.Texture, ERenderTargetLoadAction::ENoAction); 688 | 689 | // Create FTrianglePS Pixel Shader 690 | TShaderMapRef<FTrianglePS> PixelShader(ViewShaderMap); 691 | 692 | // Add Pass 693 | AddFullscreenPass(GraphBuilder, 694 | ViewShaderMap, 695 | RDG_EVENT_NAME("TrianglePass"), 696 | PixelShader, 697 | PassParams, 698 | ViewInfo); 699 | } 700 | ``` 701 | 702 | ### Binding the Delegate 703 | 704 | I created the `TrianglePass_RenderThread` function. 705 | 706 | ```cpp 707 | FScreenPassTexture FMyViewExtension::TrianglePass_RenderThread(FRDGBuilder& GraphBuilder, const FSceneView& View, const FPostProcessMaterialInputs& InOutInputs) 708 | { 709 | const FScreenPassTexture SceneColor = InOutInputs.GetInput(EPostProcessMaterialInput::SceneColor); 710 | 711 | RDG_GPU_STAT_SCOPE(GraphBuilder, TrianglePass) 712 | RDG_EVENT_SCOPE(GraphBuilder, "TrianglePass"); 713 | 714 | // Casting the FSceneView to FViewInfo 715 | const FIntRect ViewInfo = static_cast<const FViewInfo&>(View).ViewRect; 716 | const FGlobalShaderMap* ViewShaderMap = static_cast<const FViewInfo&>(View).ShaderMap; 717 | 718 | RenderTriangle(GraphBuilder, ViewShaderMap, ViewInfo, SceneColor); 719 | 720 | return SceneColor; 721 | } 722 | ``` 723 | 724 | Before calling `RenderTriangle`, you need to get the `SceneColor` texture and perform a couple of `static_cast`s: one for the viewport dimensions and the other for a pointer to the global shader map. The global shader map points to the custom shader objects we declared using the global shader macros. 725 | 726 | Lastly, add the delegate function `TrianglePass_RenderThread` inside the if statement I talked about before. 727 | 728 | ```cpp 729 | void FMyViewExtension::SubscribeToPostProcessingPass(EPostProcessingPass Pass, FAfterPassCallbackDelegateArray& 730 | InOutPassCallbacks, bool bIsPassEnabled) 731 | { 732 | if (Pass == EPostProcessingPass::Tonemap) 733 | { 734 | InOutPassCallbacks.Add(FAfterPassCallbackDelegate::CreateRaw(this, &FMyViewExtension::TrianglePass_RenderThread)); 735 | } 736 | } 737 | ``` 738 | 739 | **Compile** 740 | 741 | You should see the triangle drawn to the viewport of your editor.
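One helper the draw call above references but that is never shown in the snippets is `DrawFullScreenTriangle`. A minimal sketch of what it can look like, assuming the global `GTriangleVertexBuffer` and `GTriangleIndexBuffer` resources declared earlier (it mirrors the `PixelShaderUtils` helpers the header comment mentions, but uses our three-vertex triangle buffers):

```cpp
// MyViewExtension.cpp (sketch): bind the triangle vertex buffer and issue the indexed draw.
void FMyViewExtension::DrawFullScreenTriangle(FRHICommandList& RHICmdList, uint32 InstanceCount)
{
	RHICmdList.SetStreamSource(0, GTriangleVertexBuffer.VertexBufferRHI, 0);
	RHICmdList.DrawIndexedPrimitive(
		GTriangleIndexBuffer.IndexBufferRHI,
		0,              // BaseVertexIndex
		0,              // FirstInstance
		3,              // NumVertices
		0,              // StartIndex
		1,              // NumPrimitives (one triangle)
		InstanceCount); // NumInstances
}
```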
742 | 743 | ![[Unreal Engine Render Dependency Graph/Diagrams/TriangleOutput.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/5110a92b8c25e1eab6f28e73456570649c2d0470/Diagrams/TriangleOutput.png) 744 | 745 | 746 | -------------------------------------------------------------------------------- /Unreal Engine Rendering Pipeline.md: -------------------------------------------------------------------------------- 1 | # Unreal Engine Rendering Pipeline 2 | 3 | Before getting into Unreal's rendering pipeline, the concepts of a rendering pass and deferred rendering need to be addressed. 4 | 5 | A `Rendering Pass` is a set of one or more draw calls executed on the GPU. Usually many draw calls are grouped together to ensure proper order of execution. This 6 | is because the output of a previous pass may be used as input for later passes. 7 | 8 | `Deferred rendering` is the default method in Unreal: it renders lights and materials in a separate pass. This separate pass waits for the base pass to accumulate 9 | key information such as opacity, specular, diffuse, normals, etc. The example below shows how deferred rendering works. 10 | 11 | ![[Unreal Engine Render Dependency Graph/Diagrams/DeferredRender.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/a49756d90f6362b9a8ab10e4ac16edddac530e03/Diagrams/DeferredRender.png) 12 | 13 | Instead of computing lighting and shading for each pixel as it is rasterized, Unreal uses deferred rendering to capture information about the scene’s geometry into 14 | the “off-screen” GBuffers. That information is then used in a second pass to apply lighting and shading to the scene. The advantage is a more efficient use 15 | of the GPU: separating geometry from lighting and shading can improve performance since it allows the GPU to process a large number of lights and 16 | effects simultaneously. Additionally, this allows for flexibility with dynamic lighting and complex lighting setups. 17 | 18 | ## Pass Order in Unreal Engine 19 | 20 | These passes may change, but in general this is the order of things. I recommend downloading RenderDoc and hooking the engine to see for yourself. 21 | 22 | **Base Pass** 23 | - Rendering final attributes of Opaque or Masked materials to the G-Buffer 24 | - Reading static lighting and saving it to the G-Buffer 25 | - Applying DBuffer decals 26 | - Applying fog 27 | - Calculating final velocity (from packed 3D velocity) 28 | - In forward renderer: dynamic lighting 29 | 30 | In Deferred mode the base pass saves the material properties into the GBuffer, as highlighted earlier, and leaves the lighting calculation for later. 31 | 32 | **Geometry Passes** 33 | 34 | The Geometry pass is where the meshes get drawn and prioritized before lighting. 35 | 36 | #### PrePass 37 | 38 | ![[Unreal Engine Render Dependency Graph/Diagrams/PrePass.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/a49756d90f6362b9a8ab10e4ac16edddac530e03/Diagrams/PrePass.png) 39 | 40 | Early rendering of the depth Z-buffer, which is used to sort out meshes based on translucency. It is also used to cull meshes that are hidden behind 41 | other meshes (so they are not rendered). 42 | 43 | **HZB** 44 | - Generates a hierarchical Z-buffer 45 | The HZB is used by an occlusion culling method and by screen space techniques for ambient occlusion and reflection.
46 | 47 | **Render Velocities** 48 | - Saves velocity of each vertex (used later by motion blur and temporal anti-aliasing) 49 | 50 | `Velocity` is a buffer that measures the velocity of every moving vertex and saves it into the motion blur velocity buffer. The Velocity buffer compares the difference between the current frame and one frame behind to create a mask. In `Doom 2016`they use this mask to render only meshes that are not static in a 51 | scene to optimize rendering of meshes that moved in the next frame. 52 | 53 | **Lighting Pass** 54 | This is the most hardcore part of the frame, especially with a lot of dynamic and shadowed light sources. 55 | **Direct Lighting** 56 | - Optimized lighting in forward shading 57 | **Non-Shadowed Lights** 58 | - Lights in deferred rendering that don’t cast shadows 59 | **Shadowed Lights** 60 | - Lights that obviously cast dynamic shadows 61 | **Shadow Depths** 62 | - Generates depth maps for shadow-casting lights 63 | 64 | ![[Unreal Engine Render Dependency Graph/Diagrams/ShadowProjection.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/a49756d90f6362b9a8ab10e4ac16edddac530e03/Diagrams/ShadowProjection.png) 65 | 66 | **Shadow Projection** 67 | - Final Rendering of Shadows 68 | 69 | **Indirect Lighting** 70 | ![[Unreal Engine Render Dependency Graph/Diagrams/IndirectLighting.png]](https://github.com/staticJPL/Render-Dependency-Graph-Documentation/blob/a49756d90f6362b9a8ab10e4ac16edddac530e03/Diagrams/IndirectLighting.png) 71 | - Screen space ambient occlusion 72 | - Decals (non-Buffer type) 73 | 74 | **Composition After Lighting** 75 | - Handles subsurface scattering 76 | 77 | **Translucency and lighting** 78 | - Renders translucent materials 79 | - Lighting of materials that use surface forward shading. 80 | 81 | **Reflections** 82 | - Reading and blending reflection capture actors’ results into a full-screen reflection buffer 83 | 84 | **Screen Space Reflections** 85 | - Real-Time dynamic reflections 86 | - Done in Post process using a screen-space ray tracing technique 87 | 88 | **Post Processing** 89 | The post processing is the last pass of the render pipeline and is the part of the rendering process we will draw our triangle later on. 90 | 91 | - Depth of Field (BokehDOFRecombine) 92 | - Temporal anti-aliasing (TemporalAA) 93 | - Reading velocity values (VelocityFlatten) 94 | - Motion blur (MotionBlur) 95 | - Auto exposure (PostProcessEyeAdaptation) 96 | - Tone mapping (Tonemapper) 97 | - Upscaling from rendering resolution to display’s resolution (PostProcessUpscale) 98 | --------------------------------------------------------------------------------