├── .gitignore ├── .travis.yml ├── CHANGES.md ├── CODE_OF_CONDUCT.md ├── Cargo.toml ├── LICENSE-APACHE ├── LICENSE-MIT ├── README.md ├── appveyor.yml ├── rustfmt.toml ├── src ├── cmd.rs └── lib.rs └── tests ├── global.rs └── image.rs /.gitignore: -------------------------------------------------------------------------------- 1 | /target 2 | **/*.rs.bk 3 | Cargo.lock 4 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: rust 2 | env: 3 | - RUST_BACKTRACE=1 4 | rust: 5 | - stable 6 | - beta 7 | - nightly 8 | cache: cargo 9 | -------------------------------------------------------------------------------- /CHANGES.md: -------------------------------------------------------------------------------- 1 | # Changes 2 | 3 | ## 0.1.6 (2019-07-14) 4 | 5 | * Removed inefficient Vec on barrier structs in favor of slice references. 6 | 7 | ## 0.1.5 8 | 9 | * Updated to ash 0.29. 10 | 11 | ## 0.1.4 12 | 13 | * Minor optimizations. 14 | 15 | ## 0.1.3 16 | 17 | * Rust 2018 Edition. 18 | 19 | ## 0.1.2 (2018-11-17) 20 | 21 | * Updated to ash 0.26 22 | * Use default struct init from ash 23 | * Made function pointer structs borrowed for performance 24 | * Some minor cleanup 25 | 26 | ## 0.1.1 (2018-11-15) 27 | 28 | * Updated to ash 0.25 (Vulkan 1.1) 29 | * Added support for NVX generated commands 30 | * Added support for read-only depth/stencil + writeable depth/stencil 31 | * Added Copy and Default traits to AccessType and ImageLayout 32 | * Added Debug, Default, and Clone traits to GlobalBarrier, BufferBarrier, and ImageBarrier 33 | 34 | ## 0.1.0 (2018-08-26) 35 | 36 | * First release -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. 6 | 7 | ## Our Standards 8 | 9 | Examples of behavior that contributes to creating a positive environment include: 10 | 11 | * Using welcoming and inclusive language 12 | * Being respectful of differing viewpoints and experiences 13 | * Gracefully accepting constructive criticism 14 | * Focusing on what is best for the community 15 | * Showing empathy towards other community members 16 | 17 | Examples of unacceptable behavior by participants include: 18 | 19 | * The use of sexualized language or imagery and unwelcome sexual attention or advances 20 | * Trolling, insulting/derogatory comments, and personal or political attacks 21 | * Public or private harassment 22 | * Publishing others' private information, such as a physical or electronic address, without explicit permission 23 | * Other conduct which could reasonably be considered inappropriate in a professional setting 24 | 25 | ## Our Responsibilities 26 | 27 | Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. 
28 | 29 | Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. 30 | 31 | ## Scope 32 | 33 | This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. 34 | 35 | ## Enforcement 36 | 37 | Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at graham@wihlidal.ca. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. 38 | 39 | Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. 40 | 41 | ## Attribution 42 | 43 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version] 44 | 45 | [homepage]: http://contributor-covenant.org 46 | [version]: http://contributor-covenant.org/version/1/4/ -------------------------------------------------------------------------------- /Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "vk-sync" 3 | version = "0.1.6" 4 | license = "MIT/Apache-2.0" 5 | authors = ["Graham Wihlidal "] 6 | homepage = "https://github.com/gwihlidal/vk-sync-rs" 7 | repository = "https://github.com/gwihlidal/vk-sync-rs" 8 | documentation = "https://docs.rs/vk-sync" 9 | description = "Simplification of core Vulkan synchronization mechanisms such as pipeline barriers and events." 10 | categories = ["api-bindings", "rendering", "rendering::engine", "rendering::graphics-api", ] 11 | keywords = ["vulkan", "vk", "ash", "graphics", "3d"] 12 | readme = "README.md" 13 | exclude = [ 14 | ".travis.yml", 15 | ".gitignore", 16 | "appveyor.yml" 17 | ] 18 | edition = "2018" 19 | 20 | [badges] 21 | travis-ci = { repository = "gwihlidal/vk-sync-rs" } 22 | appveyor = { repository = "gwihlidal/vk-sync-rs" } 23 | maintenance = { status = "actively-developed" } 24 | 25 | [dependencies] 26 | ash = "0.33" 27 | -------------------------------------------------------------------------------- /LICENSE-APACHE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 
14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 
134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright {yyyy} {name of copyright owner} 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 
193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /LICENSE-MIT: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Graham Wihlidal 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | vk-sync 2 | ======== 3 | 4 | [![vk-sync on travis-ci.com](https://travis-ci.com/gwihlidal/vk-sync-rs.svg?branch=master)](https://travis-ci.com/gwihlidal/vk-sync-rs) 5 | [![vk-sync on appveyor.com](https://ci.appveyor.com/api/projects/status/9so5ab02cqyba843/branch/master?svg=true)](https://ci.appveyor.com/project/gwihlidal/vk-sync-rs/branch/master) 6 | [![Latest version](https://img.shields.io/crates/v/vk-sync.svg)](https://crates.io/crates/vk-sync) 7 | [![Documentation](https://docs.rs/vk-sync/badge.svg)](https://docs.rs/vk-sync) 8 | [![](https://tokei.rs/b1/github/gwihlidal/vk-sync-rs)](https://github.com/gwihlidal/vk-sync-rs) 9 | 10 | Simplified Vulkan synchronization logic, written in Rust. 11 | 12 | - [Documentation](https://docs.rs/vk-sync) 13 | - [Release Notes](https://github.com/gwihlidal/vk-sync-rs/releases) 14 | 15 | ## Overview 16 | 17 | In an effort to make Vulkan synchronization more accessible, this library provides an efficient simplification of core synchronization mechanisms such as pipeline barriers and events. 18 | 19 | Rather than the complex maze of enums and bit flags in Vulkan - many combinations of which are invalid or nonsensical - this library collapses them into a much shorter list of ~40 distinct usage types, and a couple of options for handling image layouts. 20 | 21 | Additionally, these usage types provide an easier mapping to other graphics APIs like DirectX 12. 22 | 23 | Other synchronization mechanisms such as semaphores, fences and render passes are not addressed in this library at present.
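For illustration, here is a minimal sketch (assuming an `ash` device, a command buffer and a `vk::Image` are already in scope as `device`, `command_buffer` and `image`) that records a single image barrier making a transfer write visible to a fragment-shader sampled read; the library derives the stage masks, access masks and the `TRANSFER_DST_OPTIMAL` to `SHADER_READ_ONLY_OPTIMAL` layout transition from the two access types:

```rust
use ash::vk;

// Transition an image that was just written by a transfer so that it can be
// sampled in a fragment shader. `device`, `command_buffer` and `image` are
// assumed to exist in the surrounding code.
let barrier = vk_sync::ImageBarrier {
	previous_accesses: &[vk_sync::AccessType::TransferWrite],
	next_accesses: &[vk_sync::AccessType::FragmentShaderReadSampledImageOrUniformTexelBuffer],
	previous_layout: vk_sync::ImageLayout::Optimal,
	next_layout: vk_sync::ImageLayout::Optimal,
	discard_contents: false,
	src_queue_family_index: vk::QUEUE_FAMILY_IGNORED,
	dst_queue_family_index: vk::QUEUE_FAMILY_IGNORED,
	image,
	range: vk::ImageSubresourceRange {
		aspect_mask: vk::ImageAspectFlags::COLOR,
		base_mip_level: 0,
		level_count: 1,
		base_array_layer: 0,
		layer_count: 1,
	},
};

vk_sync::cmd::pipeline_barrier(&device, command_buffer, None, &[], &[barrier]);
```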
24 | 25 | ## Bindings 26 | 27 | There are a number of Vulkan FFI bindings available, and I do plan to support the most common bindings, but for now this library only implements support for [`ash`](https://crates.io/crates/ash). Please add other bindings you need via a pull request; I would happily accept it. 28 | 29 | ## Expressiveness 30 | 31 | Although this library is fairly simple, it expresses 99% of what you'd actually ever want to do in practice. Adding the missing expressiveness would result in increased complexity which does not seem worth the trade-off. If you have any pattern you need to express, please file an issue! 32 | 33 | Here's a list of known things you cannot express: 34 | 35 | * Execution-only dependencies cannot be expressed. These are occasionally useful in conjunction with semaphores, or when trying to be clever with scheduling - but their usage is both limited and fairly tricky to get right anyway. 36 | 37 | * Depth/Stencil Input Attachments can be read in a shader using either `ImageLayout::ShaderReadOnlyOptimal` or `ImageLayout::DepthStencilReadOnlyOptimal` - this library always uses `ImageLayout::DepthStencilReadOnlyOptimal`. It is possible (though highly unlikely) when aliasing images that this results in unnecessary transitions. 38 | 39 | ## Usage 40 | 41 | Add this to your `Cargo.toml`: 42 | 43 | ```toml 44 | [dependencies] 45 | vk-sync = "0.1.6" 46 | ``` 47 | 48 | and this to your crate root: 49 | 50 | ```rust 51 | extern crate vk_sync; 52 | ``` 53 | 54 | ## License 55 | 56 | Licensed under either of 57 | 58 | * Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0) 59 | * MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT) 60 | 61 | at your option. 62 | 63 | ## Credits 64 | 65 | This library is heavily based on work by Tobias Hector (https://github.com/Tobski/simple_vulkan_synchronization). 66 | 67 | ## Contribution 68 | 69 | Unless you explicitly state otherwise, any contribution intentionally submitted 70 | for inclusion in this crate by you, as defined in the Apache-2.0 license, shall 71 | be dual licensed as above, without any additional terms or conditions. 72 | 73 | ## Code of Conduct 74 | 75 | Contribution to the vk-sync crate is organized under the terms of the 76 | Contributor Covenant. The maintainer of vk-sync, @gwihlidal, promises to 77 | intervene to uphold that code of conduct. -------------------------------------------------------------------------------- /appveyor.yml: -------------------------------------------------------------------------------- 1 | os: Visual Studio 2015 2 | 3 | environment: 4 | matrix: 5 | # Stable 64-bit MSVC 6 | - channel: stable 7 | target: x86_64-pc-windows-msvc 8 | # Beta 64-bit MSVC 9 | - channel: beta 10 | target: x86_64-pc-windows-msvc 11 | # Nightly 64-bit MSVC 12 | - channel: nightly 13 | target: x86_64-pc-windows-msvc 14 | 15 | matrix: 16 | allow_failures: 17 | - channel: nightly 18 | 19 | install: 20 | - appveyor DownloadFile https://win.rustup.rs/ -FileName rustup-init.exe 21 | - rustup-init -yv --default-toolchain %channel% --default-host %target% 22 | - set PATH=%PATH%;%USERPROFILE%\.cargo\bin 23 | - rustc -vV 24 | - cargo -vV 25 | 26 | # 'cargo test' takes care of building for us, so disable Appveyor's build stage. This prevents 27 | # the "directory does not contain a project or solution file" error.
28 | build: off 29 | 30 | test_script: 31 | - cargo test --verbose %cargoflags% 32 | -------------------------------------------------------------------------------- /rustfmt.toml: -------------------------------------------------------------------------------- 1 | hard_tabs = true 2 | reorder_imports = true -------------------------------------------------------------------------------- /src/cmd.rs: -------------------------------------------------------------------------------- 1 | use super::*; 2 | use ash; 3 | 4 | /// Simplified wrapper around `vkCmdPipelineBarrier`. 5 | /// The mapping functions defined in the parent module are used to translate the passed in 6 | /// barrier definitions into a set of pipeline stages and native Vulkan memory 7 | /// barriers to be passed to `vkCmdPipelineBarrier`. 8 | /// `command_buffer` is passed unmodified to `vkCmdPipelineBarrier`. 9 | pub fn pipeline_barrier( 10 | device: &ash::Device, 11 | command_buffer: vk::CommandBuffer, 12 | global_barrier: Option<GlobalBarrier>, 13 | buffer_barriers: &[BufferBarrier], 14 | image_barriers: &[ImageBarrier], 15 | ) { 16 | let mut src_stage_mask = vk::PipelineStageFlags::TOP_OF_PIPE; 17 | let mut dst_stage_mask = vk::PipelineStageFlags::BOTTOM_OF_PIPE; 18 | 19 | // TODO: Optimize out the Vec heap allocations 20 | let mut vk_memory_barriers: Vec<vk::MemoryBarrier> = Vec::with_capacity(1); 21 | let mut vk_buffer_barriers: Vec<vk::BufferMemoryBarrier> = 22 | Vec::with_capacity(buffer_barriers.len()); 23 | let mut vk_image_barriers: Vec<vk::ImageMemoryBarrier> = 24 | Vec::with_capacity(image_barriers.len()); 25 | 26 | // Global memory barrier 27 | if let Some(ref barrier) = global_barrier { 28 | let (src_mask, dst_mask, barrier) = get_memory_barrier(barrier); 29 | src_stage_mask |= src_mask; 30 | dst_stage_mask |= dst_mask; 31 | vk_memory_barriers.push(barrier); 32 | } 33 | 34 | // Buffer memory barriers 35 | for buffer_barrier in buffer_barriers { 36 | let (src_mask, dst_mask, barrier) = get_buffer_memory_barrier(buffer_barrier); 37 | src_stage_mask |= src_mask; 38 | dst_stage_mask |= dst_mask; 39 | vk_buffer_barriers.push(barrier); 40 | } 41 | 42 | // Image memory barriers 43 | for image_barrier in image_barriers { 44 | let (src_mask, dst_mask, barrier) = get_image_memory_barrier(image_barrier); 45 | src_stage_mask |= src_mask; 46 | dst_stage_mask |= dst_mask; 47 | vk_image_barriers.push(barrier); 48 | } 49 | 50 | unsafe { 51 | device.cmd_pipeline_barrier( 52 | command_buffer, 53 | src_stage_mask, 54 | dst_stage_mask, 55 | vk::DependencyFlags::empty(), 56 | &vk_memory_barriers, 57 | &vk_buffer_barriers, 58 | &vk_image_barriers, 59 | ); 60 | } 61 | } 62 | 63 | /// Wrapper around `vkCmdSetEvent`. 64 | /// Sets an event when the accesses defined by `previous_accesses` are completed. 65 | /// `command_buffer` and `event` are passed unmodified to `vkCmdSetEvent`. 66 | pub fn set_event( 67 | device: &ash::Device, 68 | command_buffer: vk::CommandBuffer, 69 | event: vk::Event, 70 | previous_accesses: &[AccessType], 71 | ) { 72 | let mut stage_mask = vk::PipelineStageFlags::TOP_OF_PIPE; 73 | for previous_access in previous_accesses { 74 | let previous_info = get_access_info(*previous_access); 75 | stage_mask |= previous_info.stage_mask; 76 | } 77 | 78 | unsafe { 79 | device.cmd_set_event(command_buffer, event, stage_mask); 80 | } 81 | } 82 | 83 | /// Wrapper around `vkCmdResetEvent`. 84 | /// Resets an event when the accesses defined by `previous_accesses` are completed. 85 | /// `command_buffer` and `event` are passed unmodified to `vkCmdResetEvent`.
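// The event wrappers in this module are typically used in pairs. An
// illustrative sketch (with hypothetical caller-owned handles `device`,
// `cmd_buf` and `event`) where a compute-shader write is signalled with
// `set_event` and later consumed through `wait_events` together with a
// global barrier:
//
//     set_event(&device, cmd_buf, event, &[AccessType::ComputeShaderWrite]);
//     // ... record unrelated work ...
//     wait_events(
//         &device,
//         cmd_buf,
//         &[event],
//         Some(GlobalBarrier {
//             previous_accesses: &[AccessType::ComputeShaderWrite],
//             next_accesses: &[AccessType::FragmentShaderReadOther],
//         }),
//         &[],
//         &[],
//     );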
86 | pub fn reset_event( 87 | device: &ash::Device, 88 | command_buffer: vk::CommandBuffer, 89 | event: vk::Event, 90 | previous_accesses: &[AccessType], 91 | ) { 92 | let mut stage_mask = vk::PipelineStageFlags::TOP_OF_PIPE; 93 | for previous_access in previous_accesses { 94 | let previous_info = get_access_info(*previous_access); 95 | stage_mask |= previous_info.stage_mask; 96 | } 97 | 98 | unsafe { 99 | device.cmd_reset_event(command_buffer, event, stage_mask); 100 | } 101 | } 102 | 103 | /// Simplified wrapper around `vkCmdWaitEvents`. 104 | /// The mapping functions defined in the parent module are used to translate the passed in 105 | /// barrier definitions into a set of pipeline stages and native Vulkan memory 106 | /// barriers to be passed to `vkCmdWaitEvents`. 107 | /// 108 | /// `command_buffer` and `events` are passed unmodified to `vkCmdWaitEvents`. 109 | pub fn wait_events( 110 | device: &ash::Device, 111 | command_buffer: vk::CommandBuffer, 112 | events: &[vk::Event], 113 | global_barrier: Option<GlobalBarrier>, 114 | buffer_barriers: &[BufferBarrier], 115 | image_barriers: &[ImageBarrier], 116 | ) { 117 | let mut src_stage_mask = vk::PipelineStageFlags::TOP_OF_PIPE; 118 | let mut dst_stage_mask = vk::PipelineStageFlags::BOTTOM_OF_PIPE; 119 | 120 | // TODO: Optimize out the Vec heap allocations 121 | let mut vk_memory_barriers: Vec<vk::MemoryBarrier> = Vec::with_capacity(1); 122 | let mut vk_buffer_barriers: Vec<vk::BufferMemoryBarrier> = 123 | Vec::with_capacity(buffer_barriers.len()); 124 | let mut vk_image_barriers: Vec<vk::ImageMemoryBarrier> = 125 | Vec::with_capacity(image_barriers.len()); 126 | 127 | // Global memory barrier 128 | if let Some(ref barrier) = global_barrier { 129 | let (src_mask, dst_mask, barrier) = get_memory_barrier(barrier); 130 | src_stage_mask |= src_mask; 131 | dst_stage_mask |= dst_mask; 132 | vk_memory_barriers.push(barrier); 133 | } 134 | 135 | // Buffer memory barriers 136 | for buffer_barrier in buffer_barriers { 137 | let (src_mask, dst_mask, barrier) = get_buffer_memory_barrier(buffer_barrier); 138 | src_stage_mask |= src_mask; 139 | dst_stage_mask |= dst_mask; 140 | vk_buffer_barriers.push(barrier); 141 | } 142 | 143 | // Image memory barriers 144 | for image_barrier in image_barriers { 145 | let (src_mask, dst_mask, barrier) = get_image_memory_barrier(image_barrier); 146 | src_stage_mask |= src_mask; 147 | dst_stage_mask |= dst_mask; 148 | vk_image_barriers.push(barrier); 149 | } 150 | 151 | unsafe { 152 | device.cmd_wait_events( 153 | command_buffer, 154 | &events, 155 | src_stage_mask, 156 | dst_stage_mask, 157 | &vk_memory_barriers, 158 | &vk_buffer_barriers, 159 | &vk_image_barriers, 160 | ); 161 | } 162 | } 163 | -------------------------------------------------------------------------------- /src/lib.rs: -------------------------------------------------------------------------------- 1 | //! In an effort to make Vulkan synchronization more accessible, this library 2 | //! provides a simplification of core synchronization mechanisms such as 3 | //! pipeline barriers and events. 4 | //! 5 | //! Rather than the complex maze of enums and bit flags in Vulkan - many 6 | //! combinations of which are invalid or nonsensical - this library collapses 7 | //! them into a shorter list of distinct usage types, and a couple of options 8 | //! for handling image layouts. 9 | //! 10 | //! Additionally, these usage types provide an easier mapping to other graphics 11 | //! APIs like DirectX 12. 12 | //! 13 | //! Other synchronization mechanisms such as semaphores, fences and render 14 | //!
passes are not addressed in this library at present. 15 | 16 | use ash::vk; 17 | 18 | pub mod cmd; 19 | 20 | /// Defines all potential resource usages 21 | #[derive(Debug, Copy, Clone, PartialEq)] 22 | pub enum AccessType { 23 | /// No access. Useful primarily for initialization 24 | Nothing, 25 | 26 | /// Command buffer read operation as defined by `NVX_device_generated_commands` 27 | CommandBufferReadNVX, 28 | 29 | /// Read as an indirect buffer for drawing or dispatch 30 | IndirectBuffer, 31 | 32 | /// Read as an index buffer for drawing 33 | IndexBuffer, 34 | 35 | /// Read as a vertex buffer for drawing 36 | VertexBuffer, 37 | 38 | /// Read as a uniform buffer in a vertex shader 39 | VertexShaderReadUniformBuffer, 40 | 41 | /// Read as a sampled image/uniform texel buffer in a vertex shader 42 | VertexShaderReadSampledImageOrUniformTexelBuffer, 43 | 44 | /// Read as any other resource in a vertex shader 45 | VertexShaderReadOther, 46 | 47 | /// Read as a uniform buffer in a tessellation control shader 48 | TessellationControlShaderReadUniformBuffer, 49 | 50 | /// Read as a sampled image/uniform texel buffer in a tessellation control shader 51 | TessellationControlShaderReadSampledImageOrUniformTexelBuffer, 52 | 53 | /// Read as any other resource in a tessellation control shader 54 | TessellationControlShaderReadOther, 55 | 56 | /// Read as a uniform buffer in a tessellation evaluation shader 57 | TessellationEvaluationShaderReadUniformBuffer, 58 | 59 | /// Read as a sampled image/uniform texel buffer in a tessellation evaluation shader 60 | TessellationEvaluationShaderReadSampledImageOrUniformTexelBuffer, 61 | 62 | /// Read as any other resource in a tessellation evaluation shader 63 | TessellationEvaluationShaderReadOther, 64 | 65 | /// Read as a uniform buffer in a geometry shader 66 | GeometryShaderReadUniformBuffer, 67 | 68 | /// Read as a sampled image/uniform texel buffer in a geometry shader 69 | GeometryShaderReadSampledImageOrUniformTexelBuffer, 70 | 71 | /// Read as any other resource in a geometry shader 72 | GeometryShaderReadOther, 73 | 74 | /// Read as a uniform buffer in a fragment shader 75 | FragmentShaderReadUniformBuffer, 76 | 77 | /// Read as a sampled image/uniform texel buffer in a fragment shader 78 | FragmentShaderReadSampledImageOrUniformTexelBuffer, 79 | 80 | /// Read as an input attachment with a color format in a fragment shader 81 | FragmentShaderReadColorInputAttachment, 82 | 83 | /// Read as an input attachment with a depth/stencil format in a fragment shader 84 | FragmentShaderReadDepthStencilInputAttachment, 85 | 86 | /// Read as any other resource in a fragment shader 87 | FragmentShaderReadOther, 88 | 89 | /// Read by blending/logic operations or subpass load operations 90 | ColorAttachmentRead, 91 | 92 | /// Read by depth/stencil tests or subpass load operations 93 | DepthStencilAttachmentRead, 94 | 95 | /// Read as a uniform buffer in a compute shader 96 | ComputeShaderReadUniformBuffer, 97 | 98 | /// Read as a sampled image/uniform texel buffer in a compute shader 99 | ComputeShaderReadSampledImageOrUniformTexelBuffer, 100 | 101 | /// Read as any other resource in a compute shader 102 | ComputeShaderReadOther, 103 | 104 | /// Read as a uniform buffer in any shader 105 | AnyShaderReadUniformBuffer, 106 | 107 | /// Read as a uniform buffer in any shader, or a vertex buffer 108 | AnyShaderReadUniformBufferOrVertexBuffer, 109 | 110 | /// Read as a sampled image in any shader 111 | AnyShaderReadSampledImageOrUniformTexelBuffer, 112 | 113 | /// Read as 
any other resource (excluding attachments) in any shader 114 | AnyShaderReadOther, 115 | 116 | /// Read as the source of a transfer operation 117 | TransferRead, 118 | 119 | /// Read on the host 120 | HostRead, 121 | 122 | /// Read by the presentation engine (i.e. `vkQueuePresentKHR`) 123 | Present, 124 | 125 | /// Command buffer write operation as defined by `NVX_device_generated_commands` 126 | CommandBufferWriteNVX, 127 | 128 | /// Written as any resource in a vertex shader 129 | VertexShaderWrite, 130 | 131 | /// Written as any resource in a tessellation control shader 132 | TessellationControlShaderWrite, 133 | 134 | /// Written as any resource in a tessellation evaluation shader 135 | TessellationEvaluationShaderWrite, 136 | 137 | /// Written as any resource in a geometry shader 138 | GeometryShaderWrite, 139 | 140 | /// Written as any resource in a fragment shader 141 | FragmentShaderWrite, 142 | 143 | /// Written as a color attachment during rendering, or via a subpass store op 144 | ColorAttachmentWrite, 145 | 146 | /// Written as a depth/stencil attachment during rendering, or via a subpass store op 147 | DepthStencilAttachmentWrite, 148 | 149 | /// Written as a depth aspect of a depth/stencil attachment during rendering, whilst the 150 | /// stencil aspect is read-only. Requires `VK_KHR_maintenance2` to be enabled. 151 | DepthAttachmentWriteStencilReadOnly, 152 | 153 | /// Written as a stencil aspect of a depth/stencil attachment during rendering, whilst the 154 | /// depth aspect is read-only. Requires `VK_KHR_maintenance2` to be enabled. 155 | StencilAttachmentWriteDepthReadOnly, 156 | 157 | /// Written as any resource in a compute shader 158 | ComputeShaderWrite, 159 | 160 | /// Written as any resource in any shader 161 | AnyShaderWrite, 162 | 163 | /// Written as the destination of a transfer operation 164 | TransferWrite, 165 | 166 | /// Written on the host 167 | HostWrite, 168 | 169 | /// Read or written as a color attachment during rendering 170 | ColorAttachmentReadWrite, 171 | 172 | /// Covers any access - useful for debug, generally avoid for performance reasons 173 | General, 174 | 175 | /// Read as a sampled image/uniform texel buffer in a ray tracing shader 176 | RayTracingShaderReadSampledImageOrUniformTexelBuffer, 177 | 178 | /// Read as an input attachment with a color format in a ray tracing shader 179 | RayTracingShaderReadColorInputAttachment, 180 | 181 | /// Read as an input attachment with a depth/stencil format in a ray tracing shader 182 | RayTracingShaderReadDepthStencilInputAttachment, 183 | 184 | /// Read as an acceleration structure in a ray tracing shader 185 | RayTracingShaderReadAccelerationStructure, 186 | 187 | /// Read as any other resource in a ray tracing shader 188 | RayTracingShaderReadOther, 189 | 190 | /// Written as an acceleration structure during acceleration structure building 191 | AccelerationStructureBuildWrite, 192 | 193 | /// Read as an acceleration structure during acceleration structure building (e.g. a BLAS when building a TLAS) 194 | AccelerationStructureBuildRead, 195 | 196 | // Written as a buffer during acceleration structure building (e.g. a staging buffer) 197 | AccelerationStructureBufferWrite, 198 | } 199 | 200 | impl Default for AccessType { 201 | fn default() -> Self { 202 | AccessType::Nothing 203 | } 204 | } 205 | 206 | /// Defines a handful of layout options for images. 
207 | /// Rather than a list of all possible image layouts, this reduced list is 208 | /// correlated with the access types to map to the correct Vulkan layouts. 209 | /// `Optimal` is usually preferred. 210 | #[derive(Debug, Copy, Clone, PartialEq)] 211 | pub enum ImageLayout { 212 | /// Choose the most optimal layout for each usage. Performs layout transitions as appropriate for the access. 213 | Optimal, 214 | 215 | /// Layout accessible by all Vulkan access types on a device - no layout transitions except for presentation 216 | General, 217 | 218 | /// Similar to `General`, but also allows presentation engines to access it - no layout transitions. 219 | /// Requires `VK_KHR_shared_presentable_image` to be enabled, and this can only be used for shared presentable 220 | /// images (i.e. single-buffered swap chains). 221 | GeneralAndPresentation, 222 | } 223 | 224 | impl Default for ImageLayout { 225 | fn default() -> Self { 226 | ImageLayout::Optimal 227 | } 228 | } 229 | 230 | /// Global barriers define a set of accesses on multiple resources at once. 231 | /// If a buffer or image doesn't require a queue ownership transfer, or an image 232 | /// doesn't require a layout transition (e.g. you're using one of the 233 | /// `ImageLayout::General*` layouts) then a global barrier should be preferred. 234 | /// 235 | /// Simply define the previous and next access types of resources affected. 236 | #[derive(Debug, Default, Clone)] 237 | pub struct GlobalBarrier<'a> { 238 | pub previous_accesses: &'a [AccessType], 239 | pub next_accesses: &'a [AccessType], 240 | } 241 | 242 | /// Buffer barriers should only be used when a queue family ownership transfer 243 | /// is required - prefer global barriers at all other times. 244 | /// 245 | /// Access types are defined in the same way as for a global memory barrier, but 246 | /// they only affect the buffer range identified by `buffer`, `offset` and `size`, 247 | /// rather than all resources. 248 | /// 249 | /// `src_queue_family_index` and `dst_queue_family_index` will be passed unmodified 250 | /// into a buffer memory barrier. 251 | /// 252 | /// A buffer barrier defining a queue ownership transfer needs to be executed 253 | /// twice - once by a queue in the source queue family, and then once again by a 254 | /// queue in the destination queue family, with a semaphore guaranteeing 255 | /// execution order between them. 256 | #[derive(Debug, Default, Clone)] 257 | pub struct BufferBarrier<'a> { 258 | pub previous_accesses: &'a [AccessType], 259 | pub next_accesses: &'a [AccessType], 260 | pub src_queue_family_index: u32, 261 | pub dst_queue_family_index: u32, 262 | pub buffer: vk::Buffer, 263 | pub offset: usize, 264 | pub size: usize, 265 | } 266 | 267 | /// Image barriers should only be used when a queue family ownership transfer 268 | /// or an image layout transition is required - prefer global barriers at all 269 | /// other times. 270 | /// 271 | /// In general it is better to use image barriers with `ImageLayout::Optimal` 272 | /// than it is to use global barriers with images using either of the 273 | /// `ImageLayout::General*` layouts. 274 | /// 275 | /// Access types are defined in the same way as for a global memory barrier, but 276 | /// they only affect the image subresource range identified by `image` and 277 | /// `range`, rather than all resources. 278 | /// 279 | /// `src_queue_family_index`, `dst_queue_family_index`, `image`, and `range` will 280 | /// be passed unmodified into an image memory barrier. 
281 | /// 282 | /// An image barrier defining a queue ownership transfer needs to be executed 283 | /// twice - once by a queue in the source queue family, and then once again by a 284 | /// queue in the destination queue family, with a semaphore guaranteeing 285 | /// execution order between them. 286 | /// 287 | /// If `discard_contents` is set to true, the contents of the image become 288 | /// undefined after the barrier is executed, which can result in a performance 289 | /// boost over attempting to preserve the contents. This is particularly useful 290 | /// for transient images where the contents are going to be immediately overwritten. 291 | /// A good example of when to use this is when an application re-uses a presented 292 | /// image after acquiring the next swap chain image. 293 | #[derive(Debug, Default, Clone)] 294 | pub struct ImageBarrier<'a> { 295 | pub previous_accesses: &'a [AccessType], 296 | pub next_accesses: &'a [AccessType], 297 | pub previous_layout: ImageLayout, 298 | pub next_layout: ImageLayout, 299 | pub discard_contents: bool, 300 | pub src_queue_family_index: u32, 301 | pub dst_queue_family_index: u32, 302 | pub image: vk::Image, 303 | pub range: vk::ImageSubresourceRange, 304 | } 305 | 306 | /// Mapping function that translates a global barrier into a set of source and 307 | /// destination pipeline stages, and a memory barrier, that can be used with 308 | /// Vulkan synchronization methods. 309 | pub fn get_memory_barrier( 310 | barrier: &GlobalBarrier, 311 | ) -> ( 312 | vk::PipelineStageFlags, 313 | vk::PipelineStageFlags, 314 | vk::MemoryBarrier, 315 | ) { 316 | let mut src_stages = vk::PipelineStageFlags::empty(); 317 | let mut dst_stages = vk::PipelineStageFlags::empty(); 318 | 319 | let mut memory_barrier = vk::MemoryBarrier::default(); 320 | 321 | for previous_access in barrier.previous_accesses { 322 | let previous_info = get_access_info(*previous_access); 323 | 324 | src_stages |= previous_info.stage_mask; 325 | 326 | // Add appropriate availability operations - for writes only. 327 | if is_write_access(*previous_access) { 328 | memory_barrier.src_access_mask |= previous_info.access_mask; 329 | } 330 | } 331 | 332 | for next_access in barrier.next_accesses { 333 | let next_info = get_access_info(*next_access); 334 | 335 | dst_stages |= next_info.stage_mask; 336 | 337 | // Add visibility operations as necessary. 338 | // If the src access mask, this is a WAR hazard (or for some reason a "RAR"), 339 | // so the dst access mask can be safely zeroed as these don't need visibility. 340 | if memory_barrier.src_access_mask != vk::AccessFlags::empty() { 341 | memory_barrier.dst_access_mask |= next_info.access_mask; 342 | } 343 | } 344 | 345 | // Ensure that the stage masks are valid if no stages were determined 346 | if src_stages == vk::PipelineStageFlags::empty() { 347 | src_stages = vk::PipelineStageFlags::TOP_OF_PIPE; 348 | } 349 | 350 | if dst_stages == vk::PipelineStageFlags::empty() { 351 | dst_stages = vk::PipelineStageFlags::BOTTOM_OF_PIPE; 352 | } 353 | 354 | (src_stages, dst_stages, memory_barrier) 355 | } 356 | 357 | /// Mapping function that translates a buffer barrier into a set of source and 358 | /// destination pipeline stages, and a buffer memory barrier, that can be used 359 | /// with Vulkan synchronization methods. 
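// Illustrative sketch (hypothetical names; the buffer handle, size and queue
// family indices are assumed to come from the application): releasing a
// freshly written staging buffer from a transfer queue family to a compute
// queue family. The matching "acquire" barrier must also be recorded on a
// queue of the destination family, as described for `BufferBarrier` above.
//
//     let release = BufferBarrier {
//         previous_accesses: &[AccessType::TransferWrite],
//         next_accesses: &[AccessType::ComputeShaderReadOther],
//         src_queue_family_index: transfer_queue_family,
//         dst_queue_family_index: compute_queue_family,
//         buffer: staging_buffer,
//         offset: 0,
//         size: buffer_size,
//     };
//     let (src_stages, dst_stages, vk_barrier) = get_buffer_memory_barrier(&release);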
360 | pub fn get_buffer_memory_barrier( 361 | barrier: &BufferBarrier, 362 | ) -> ( 363 | vk::PipelineStageFlags, 364 | vk::PipelineStageFlags, 365 | vk::BufferMemoryBarrier, 366 | ) { 367 | let mut src_stages = vk::PipelineStageFlags::empty(); 368 | let mut dst_stages = vk::PipelineStageFlags::empty(); 369 | 370 | let mut buffer_barrier = vk::BufferMemoryBarrier { 371 | src_queue_family_index: barrier.src_queue_family_index, 372 | dst_queue_family_index: barrier.dst_queue_family_index, 373 | buffer: barrier.buffer, 374 | offset: barrier.offset as u64, 375 | size: barrier.size as u64, 376 | ..Default::default() 377 | }; 378 | 379 | for previous_access in barrier.previous_accesses { 380 | let previous_info = get_access_info(*previous_access); 381 | 382 | src_stages |= previous_info.stage_mask; 383 | 384 | // Add appropriate availability operations - for writes only. 385 | if is_write_access(*previous_access) { 386 | buffer_barrier.src_access_mask |= previous_info.access_mask; 387 | } 388 | } 389 | 390 | for next_access in barrier.next_accesses { 391 | let next_info = get_access_info(*next_access); 392 | 393 | dst_stages |= next_info.stage_mask; 394 | 395 | // Add visibility operations as necessary. 396 | // If the src access mask, this is a WAR hazard (or for some reason a "RAR"), 397 | // so the dst access mask can be safely zeroed as these don't need visibility. 398 | if buffer_barrier.src_access_mask != vk::AccessFlags::empty() { 399 | buffer_barrier.dst_access_mask |= next_info.access_mask; 400 | } 401 | } 402 | 403 | // Ensure that the stage masks are valid if no stages were determined 404 | if src_stages == vk::PipelineStageFlags::empty() { 405 | src_stages = vk::PipelineStageFlags::TOP_OF_PIPE; 406 | } 407 | 408 | if dst_stages == vk::PipelineStageFlags::empty() { 409 | dst_stages = vk::PipelineStageFlags::BOTTOM_OF_PIPE; 410 | } 411 | 412 | (src_stages, dst_stages, buffer_barrier) 413 | } 414 | 415 | /// Mapping function that translates an image barrier into a set of source and 416 | /// destination pipeline stages, and an image memory barrier, that can be used 417 | /// with Vulkan synchronization methods. 418 | pub fn get_image_memory_barrier( 419 | barrier: &ImageBarrier, 420 | ) -> ( 421 | vk::PipelineStageFlags, 422 | vk::PipelineStageFlags, 423 | vk::ImageMemoryBarrier, 424 | ) { 425 | let mut src_stages = vk::PipelineStageFlags::empty(); 426 | let mut dst_stages = vk::PipelineStageFlags::empty(); 427 | 428 | let mut image_barrier = vk::ImageMemoryBarrier { 429 | src_queue_family_index: barrier.src_queue_family_index, 430 | dst_queue_family_index: barrier.dst_queue_family_index, 431 | image: barrier.image, 432 | subresource_range: barrier.range, 433 | ..Default::default() 434 | }; 435 | 436 | for previous_access in barrier.previous_accesses { 437 | let previous_info = get_access_info(*previous_access); 438 | 439 | src_stages |= previous_info.stage_mask; 440 | 441 | // Add appropriate availability operations - for writes only. 
442 | if is_write_access(*previous_access) { 443 | image_barrier.src_access_mask |= previous_info.access_mask; 444 | } 445 | 446 | if barrier.discard_contents { 447 | image_barrier.old_layout = vk::ImageLayout::UNDEFINED; 448 | } else { 449 | let layout = match barrier.previous_layout { 450 | ImageLayout::General => { 451 | if *previous_access == AccessType::Present { 452 | vk::ImageLayout::PRESENT_SRC_KHR 453 | } else { 454 | vk::ImageLayout::GENERAL 455 | } 456 | } 457 | ImageLayout::Optimal => previous_info.image_layout, 458 | ImageLayout::GeneralAndPresentation => { 459 | unimplemented!() 460 | // TODO: layout = vk::ImageLayout::VK_IMAGE_LAYOUT_SHARED_PRESENT_KHR 461 | } 462 | }; 463 | 464 | image_barrier.old_layout = layout; 465 | } 466 | } 467 | 468 | for next_access in barrier.next_accesses { 469 | let next_info = get_access_info(*next_access); 470 | 471 | dst_stages |= next_info.stage_mask; 472 | 473 | // Add visibility operations as necessary. 474 | // If the src access mask, this is a WAR hazard (or for some reason a "RAR"), 475 | // so the dst access mask can be safely zeroed as these don't need visibility. 476 | if image_barrier.src_access_mask != vk::AccessFlags::empty() { 477 | image_barrier.dst_access_mask |= next_info.access_mask; 478 | } 479 | 480 | let layout = match barrier.next_layout { 481 | ImageLayout::General => { 482 | if *next_access == AccessType::Present { 483 | vk::ImageLayout::PRESENT_SRC_KHR 484 | } else { 485 | vk::ImageLayout::GENERAL 486 | } 487 | } 488 | ImageLayout::Optimal => next_info.image_layout, 489 | ImageLayout::GeneralAndPresentation => { 490 | unimplemented!() 491 | // TODO: layout = vk::ImageLayout::VK_IMAGE_LAYOUT_SHARED_PRESENT_KHR 492 | } 493 | }; 494 | 495 | image_barrier.new_layout = layout; 496 | } 497 | 498 | // Ensure that the stage masks are valid if no stages were determined 499 | if src_stages == vk::PipelineStageFlags::empty() { 500 | src_stages = vk::PipelineStageFlags::TOP_OF_PIPE; 501 | } 502 | 503 | if dst_stages == vk::PipelineStageFlags::empty() { 504 | dst_stages = vk::PipelineStageFlags::BOTTOM_OF_PIPE; 505 | } 506 | 507 | (src_stages, dst_stages, image_barrier) 508 | } 509 | 510 | pub(crate) struct AccessInfo { 511 | pub(crate) stage_mask: vk::PipelineStageFlags, 512 | pub(crate) access_mask: vk::AccessFlags, 513 | pub(crate) image_layout: vk::ImageLayout, 514 | } 515 | 516 | pub(crate) fn get_access_info(access_type: AccessType) -> AccessInfo { 517 | match access_type { 518 | AccessType::Nothing => AccessInfo { 519 | stage_mask: vk::PipelineStageFlags::empty(), 520 | access_mask: vk::AccessFlags::empty(), 521 | image_layout: vk::ImageLayout::UNDEFINED, 522 | }, 523 | AccessType::CommandBufferReadNVX => AccessInfo { 524 | stage_mask: vk::PipelineStageFlags::COMMAND_PREPROCESS_NV, 525 | access_mask: vk::AccessFlags::COMMAND_PREPROCESS_READ_NV, 526 | image_layout: vk::ImageLayout::UNDEFINED, 527 | }, 528 | AccessType::IndirectBuffer => AccessInfo { 529 | stage_mask: vk::PipelineStageFlags::DRAW_INDIRECT, 530 | access_mask: vk::AccessFlags::INDIRECT_COMMAND_READ, 531 | image_layout: vk::ImageLayout::UNDEFINED, 532 | }, 533 | AccessType::IndexBuffer => AccessInfo { 534 | stage_mask: vk::PipelineStageFlags::VERTEX_INPUT, 535 | access_mask: vk::AccessFlags::INDEX_READ, 536 | image_layout: vk::ImageLayout::UNDEFINED, 537 | }, 538 | AccessType::VertexBuffer => AccessInfo { 539 | stage_mask: vk::PipelineStageFlags::VERTEX_INPUT, 540 | access_mask: vk::AccessFlags::VERTEX_ATTRIBUTE_READ, 541 | image_layout: 
vk::ImageLayout::UNDEFINED, 542 | }, 543 | AccessType::VertexShaderReadUniformBuffer => AccessInfo { 544 | stage_mask: vk::PipelineStageFlags::VERTEX_SHADER, 545 | access_mask: vk::AccessFlags::SHADER_READ, 546 | image_layout: vk::ImageLayout::UNDEFINED, 547 | }, 548 | AccessType::VertexShaderReadSampledImageOrUniformTexelBuffer => AccessInfo { 549 | stage_mask: vk::PipelineStageFlags::VERTEX_SHADER, 550 | access_mask: vk::AccessFlags::SHADER_READ, 551 | image_layout: vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL, 552 | }, 553 | AccessType::VertexShaderReadOther => AccessInfo { 554 | stage_mask: vk::PipelineStageFlags::VERTEX_SHADER, 555 | access_mask: vk::AccessFlags::SHADER_READ, 556 | image_layout: vk::ImageLayout::GENERAL, 557 | }, 558 | AccessType::TessellationControlShaderReadUniformBuffer => AccessInfo { 559 | stage_mask: vk::PipelineStageFlags::TESSELLATION_CONTROL_SHADER, 560 | access_mask: vk::AccessFlags::UNIFORM_READ, 561 | image_layout: vk::ImageLayout::UNDEFINED, 562 | }, 563 | AccessType::TessellationControlShaderReadSampledImageOrUniformTexelBuffer => AccessInfo { 564 | stage_mask: vk::PipelineStageFlags::TESSELLATION_CONTROL_SHADER, 565 | access_mask: vk::AccessFlags::SHADER_READ, 566 | image_layout: vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL, 567 | }, 568 | AccessType::TessellationControlShaderReadOther => AccessInfo { 569 | stage_mask: vk::PipelineStageFlags::TESSELLATION_CONTROL_SHADER, 570 | access_mask: vk::AccessFlags::SHADER_READ, 571 | image_layout: vk::ImageLayout::GENERAL, 572 | }, 573 | AccessType::TessellationEvaluationShaderReadUniformBuffer => AccessInfo { 574 | stage_mask: vk::PipelineStageFlags::TESSELLATION_EVALUATION_SHADER, 575 | access_mask: vk::AccessFlags::UNIFORM_READ, 576 | image_layout: vk::ImageLayout::UNDEFINED, 577 | }, 578 | AccessType::TessellationEvaluationShaderReadSampledImageOrUniformTexelBuffer => { 579 | AccessInfo { 580 | stage_mask: vk::PipelineStageFlags::TESSELLATION_EVALUATION_SHADER, 581 | access_mask: vk::AccessFlags::SHADER_READ, 582 | image_layout: vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL, 583 | } 584 | } 585 | AccessType::TessellationEvaluationShaderReadOther => AccessInfo { 586 | stage_mask: vk::PipelineStageFlags::TESSELLATION_EVALUATION_SHADER, 587 | access_mask: vk::AccessFlags::SHADER_READ, 588 | image_layout: vk::ImageLayout::GENERAL, 589 | }, 590 | AccessType::GeometryShaderReadUniformBuffer => AccessInfo { 591 | stage_mask: vk::PipelineStageFlags::GEOMETRY_SHADER, 592 | access_mask: vk::AccessFlags::UNIFORM_READ, 593 | image_layout: vk::ImageLayout::UNDEFINED, 594 | }, 595 | AccessType::GeometryShaderReadSampledImageOrUniformTexelBuffer => AccessInfo { 596 | stage_mask: vk::PipelineStageFlags::GEOMETRY_SHADER, 597 | access_mask: vk::AccessFlags::SHADER_READ, 598 | image_layout: vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL, 599 | }, 600 | AccessType::GeometryShaderReadOther => AccessInfo { 601 | stage_mask: vk::PipelineStageFlags::GEOMETRY_SHADER, 602 | access_mask: vk::AccessFlags::SHADER_READ, 603 | image_layout: vk::ImageLayout::GENERAL, 604 | }, 605 | AccessType::FragmentShaderReadUniformBuffer => AccessInfo { 606 | stage_mask: vk::PipelineStageFlags::FRAGMENT_SHADER, 607 | access_mask: vk::AccessFlags::UNIFORM_READ, 608 | image_layout: vk::ImageLayout::UNDEFINED, 609 | }, 610 | AccessType::FragmentShaderReadSampledImageOrUniformTexelBuffer => AccessInfo { 611 | stage_mask: vk::PipelineStageFlags::FRAGMENT_SHADER, 612 | access_mask: vk::AccessFlags::SHADER_READ, 613 | image_layout: vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL, 
614 | }, 615 | AccessType::FragmentShaderReadColorInputAttachment => AccessInfo { 616 | stage_mask: vk::PipelineStageFlags::FRAGMENT_SHADER, 617 | access_mask: vk::AccessFlags::INPUT_ATTACHMENT_READ, 618 | image_layout: vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL, 619 | }, 620 | AccessType::FragmentShaderReadDepthStencilInputAttachment => AccessInfo { 621 | stage_mask: vk::PipelineStageFlags::FRAGMENT_SHADER, 622 | access_mask: vk::AccessFlags::INPUT_ATTACHMENT_READ, 623 | image_layout: vk::ImageLayout::DEPTH_STENCIL_READ_ONLY_OPTIMAL, 624 | }, 625 | AccessType::FragmentShaderReadOther => AccessInfo { 626 | stage_mask: vk::PipelineStageFlags::FRAGMENT_SHADER, 627 | access_mask: vk::AccessFlags::SHADER_READ, 628 | image_layout: vk::ImageLayout::GENERAL, 629 | }, 630 | AccessType::ColorAttachmentRead => AccessInfo { 631 | stage_mask: vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT, 632 | access_mask: vk::AccessFlags::COLOR_ATTACHMENT_READ, 633 | image_layout: vk::ImageLayout::COLOR_ATTACHMENT_OPTIMAL, 634 | }, 635 | AccessType::DepthStencilAttachmentRead => AccessInfo { 636 | stage_mask: vk::PipelineStageFlags::EARLY_FRAGMENT_TESTS 637 | | vk::PipelineStageFlags::LATE_FRAGMENT_TESTS, 638 | access_mask: vk::AccessFlags::DEPTH_STENCIL_ATTACHMENT_READ, 639 | image_layout: vk::ImageLayout::DEPTH_STENCIL_READ_ONLY_OPTIMAL, 640 | }, 641 | AccessType::ComputeShaderReadUniformBuffer => AccessInfo { 642 | stage_mask: vk::PipelineStageFlags::COMPUTE_SHADER, 643 | access_mask: vk::AccessFlags::UNIFORM_READ, 644 | image_layout: vk::ImageLayout::UNDEFINED, 645 | }, 646 | AccessType::ComputeShaderReadSampledImageOrUniformTexelBuffer => AccessInfo { 647 | stage_mask: vk::PipelineStageFlags::COMPUTE_SHADER, 648 | access_mask: vk::AccessFlags::SHADER_READ, 649 | image_layout: vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL, 650 | }, 651 | AccessType::ComputeShaderReadOther => AccessInfo { 652 | stage_mask: vk::PipelineStageFlags::COMPUTE_SHADER, 653 | access_mask: vk::AccessFlags::SHADER_READ, 654 | image_layout: vk::ImageLayout::GENERAL, 655 | }, 656 | AccessType::AnyShaderReadUniformBuffer => AccessInfo { 657 | stage_mask: vk::PipelineStageFlags::ALL_COMMANDS, 658 | access_mask: vk::AccessFlags::UNIFORM_READ, 659 | image_layout: vk::ImageLayout::UNDEFINED, 660 | }, 661 | AccessType::AnyShaderReadUniformBufferOrVertexBuffer => AccessInfo { 662 | stage_mask: vk::PipelineStageFlags::ALL_COMMANDS, 663 | access_mask: vk::AccessFlags::UNIFORM_READ | vk::AccessFlags::VERTEX_ATTRIBUTE_READ, 664 | image_layout: vk::ImageLayout::UNDEFINED, 665 | }, 666 | AccessType::AnyShaderReadSampledImageOrUniformTexelBuffer => AccessInfo { 667 | stage_mask: vk::PipelineStageFlags::ALL_COMMANDS, 668 | access_mask: vk::AccessFlags::SHADER_READ, 669 | image_layout: vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL, 670 | }, 671 | AccessType::AnyShaderReadOther => AccessInfo { 672 | stage_mask: vk::PipelineStageFlags::ALL_COMMANDS, 673 | access_mask: vk::AccessFlags::SHADER_READ, 674 | image_layout: vk::ImageLayout::GENERAL, 675 | }, 676 | AccessType::TransferRead => AccessInfo { 677 | stage_mask: vk::PipelineStageFlags::TRANSFER, 678 | access_mask: vk::AccessFlags::TRANSFER_READ, 679 | image_layout: vk::ImageLayout::TRANSFER_SRC_OPTIMAL, 680 | }, 681 | AccessType::HostRead => AccessInfo { 682 | stage_mask: vk::PipelineStageFlags::HOST, 683 | access_mask: vk::AccessFlags::HOST_READ, 684 | image_layout: vk::ImageLayout::GENERAL, 685 | }, 686 | AccessType::Present => AccessInfo { 687 | stage_mask: vk::PipelineStageFlags::empty(), 688 | access_mask: 
vk::AccessFlags::empty(), 689 | image_layout: vk::ImageLayout::PRESENT_SRC_KHR, 690 | }, 691 | AccessType::CommandBufferWriteNVX => AccessInfo { 692 | stage_mask: vk::PipelineStageFlags::COMMAND_PREPROCESS_NV, 693 | access_mask: vk::AccessFlags::COMMAND_PREPROCESS_WRITE_NV, 694 | image_layout: vk::ImageLayout::UNDEFINED, 695 | }, 696 | AccessType::VertexShaderWrite => AccessInfo { 697 | stage_mask: vk::PipelineStageFlags::VERTEX_SHADER, 698 | access_mask: vk::AccessFlags::SHADER_WRITE, 699 | image_layout: vk::ImageLayout::GENERAL, 700 | }, 701 | AccessType::TessellationControlShaderWrite => AccessInfo { 702 | stage_mask: vk::PipelineStageFlags::TESSELLATION_CONTROL_SHADER, 703 | access_mask: vk::AccessFlags::SHADER_WRITE, 704 | image_layout: vk::ImageLayout::GENERAL, 705 | }, 706 | AccessType::TessellationEvaluationShaderWrite => AccessInfo { 707 | stage_mask: vk::PipelineStageFlags::TESSELLATION_EVALUATION_SHADER, 708 | access_mask: vk::AccessFlags::SHADER_WRITE, 709 | image_layout: vk::ImageLayout::GENERAL, 710 | }, 711 | AccessType::GeometryShaderWrite => AccessInfo { 712 | stage_mask: vk::PipelineStageFlags::GEOMETRY_SHADER, 713 | access_mask: vk::AccessFlags::SHADER_WRITE, 714 | image_layout: vk::ImageLayout::GENERAL, 715 | }, 716 | AccessType::FragmentShaderWrite => AccessInfo { 717 | stage_mask: vk::PipelineStageFlags::FRAGMENT_SHADER, 718 | access_mask: vk::AccessFlags::SHADER_WRITE, 719 | image_layout: vk::ImageLayout::GENERAL, 720 | }, 721 | AccessType::ColorAttachmentWrite => AccessInfo { 722 | stage_mask: vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT, 723 | access_mask: vk::AccessFlags::COLOR_ATTACHMENT_WRITE, 724 | image_layout: vk::ImageLayout::COLOR_ATTACHMENT_OPTIMAL, 725 | }, 726 | AccessType::DepthStencilAttachmentWrite => AccessInfo { 727 | stage_mask: vk::PipelineStageFlags::EARLY_FRAGMENT_TESTS 728 | | vk::PipelineStageFlags::LATE_FRAGMENT_TESTS, 729 | access_mask: vk::AccessFlags::DEPTH_STENCIL_ATTACHMENT_WRITE, 730 | image_layout: vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL, 731 | }, 732 | AccessType::DepthAttachmentWriteStencilReadOnly => AccessInfo { 733 | stage_mask: vk::PipelineStageFlags::EARLY_FRAGMENT_TESTS 734 | | vk::PipelineStageFlags::LATE_FRAGMENT_TESTS, 735 | access_mask: vk::AccessFlags::DEPTH_STENCIL_ATTACHMENT_WRITE 736 | | vk::AccessFlags::DEPTH_STENCIL_ATTACHMENT_READ, 737 | image_layout: vk::ImageLayout::DEPTH_ATTACHMENT_STENCIL_READ_ONLY_OPTIMAL, 738 | }, 739 | AccessType::StencilAttachmentWriteDepthReadOnly => AccessInfo { 740 | stage_mask: vk::PipelineStageFlags::EARLY_FRAGMENT_TESTS 741 | | vk::PipelineStageFlags::LATE_FRAGMENT_TESTS, 742 | access_mask: vk::AccessFlags::DEPTH_STENCIL_ATTACHMENT_WRITE 743 | | vk::AccessFlags::DEPTH_STENCIL_ATTACHMENT_READ, 744 | image_layout: vk::ImageLayout::DEPTH_READ_ONLY_STENCIL_ATTACHMENT_OPTIMAL, 745 | }, 746 | AccessType::ComputeShaderWrite => AccessInfo { 747 | stage_mask: vk::PipelineStageFlags::COMPUTE_SHADER, 748 | access_mask: vk::AccessFlags::SHADER_WRITE, 749 | image_layout: vk::ImageLayout::GENERAL, 750 | }, 751 | AccessType::AnyShaderWrite => AccessInfo { 752 | stage_mask: vk::PipelineStageFlags::ALL_COMMANDS, 753 | access_mask: vk::AccessFlags::SHADER_WRITE, 754 | image_layout: vk::ImageLayout::GENERAL, 755 | }, 756 | AccessType::TransferWrite => AccessInfo { 757 | stage_mask: vk::PipelineStageFlags::TRANSFER, 758 | access_mask: vk::AccessFlags::TRANSFER_WRITE, 759 | image_layout: vk::ImageLayout::TRANSFER_DST_OPTIMAL, 760 | }, 761 | AccessType::HostWrite => AccessInfo { 762 | 
stage_mask: vk::PipelineStageFlags::HOST, 763 | access_mask: vk::AccessFlags::HOST_WRITE, 764 | image_layout: vk::ImageLayout::GENERAL, 765 | }, 766 | AccessType::ColorAttachmentReadWrite => AccessInfo { 767 | stage_mask: vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT, 768 | access_mask: vk::AccessFlags::COLOR_ATTACHMENT_READ 769 | | vk::AccessFlags::COLOR_ATTACHMENT_WRITE, 770 | image_layout: vk::ImageLayout::COLOR_ATTACHMENT_OPTIMAL, 771 | }, 772 | AccessType::General => AccessInfo { 773 | stage_mask: vk::PipelineStageFlags::ALL_COMMANDS, 774 | access_mask: vk::AccessFlags::MEMORY_READ | vk::AccessFlags::MEMORY_WRITE, 775 | image_layout: vk::ImageLayout::GENERAL, 776 | }, 777 | AccessType::RayTracingShaderReadSampledImageOrUniformTexelBuffer => AccessInfo { 778 | stage_mask: vk::PipelineStageFlags::RAY_TRACING_SHADER_KHR, 779 | access_mask: vk::AccessFlags::SHADER_READ, 780 | image_layout: vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL, 781 | }, 782 | AccessType::RayTracingShaderReadColorInputAttachment => AccessInfo { 783 | stage_mask: vk::PipelineStageFlags::RAY_TRACING_SHADER_KHR, 784 | access_mask: vk::AccessFlags::INPUT_ATTACHMENT_READ, 785 | image_layout: vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL, 786 | }, 787 | AccessType::RayTracingShaderReadDepthStencilInputAttachment => AccessInfo { 788 | stage_mask: vk::PipelineStageFlags::RAY_TRACING_SHADER_KHR, 789 | access_mask: vk::AccessFlags::INPUT_ATTACHMENT_READ, 790 | image_layout: vk::ImageLayout::DEPTH_STENCIL_READ_ONLY_OPTIMAL, 791 | }, 792 | AccessType::RayTracingShaderReadAccelerationStructure => AccessInfo { 793 | stage_mask: vk::PipelineStageFlags::RAY_TRACING_SHADER_KHR, 794 | access_mask: vk::AccessFlags::ACCELERATION_STRUCTURE_READ_KHR, 795 | image_layout: vk::ImageLayout::UNDEFINED, 796 | }, 797 | AccessType::RayTracingShaderReadOther => AccessInfo { 798 | stage_mask: vk::PipelineStageFlags::RAY_TRACING_SHADER_KHR, 799 | access_mask: vk::AccessFlags::SHADER_READ, 800 | image_layout: vk::ImageLayout::GENERAL, 801 | }, 802 | AccessType::AccelerationStructureBuildWrite => AccessInfo { 803 | stage_mask: vk::PipelineStageFlags::ACCELERATION_STRUCTURE_BUILD_KHR, 804 | access_mask: vk::AccessFlags::ACCELERATION_STRUCTURE_WRITE_KHR, 805 | image_layout: vk::ImageLayout::UNDEFINED, 806 | }, 807 | AccessType::AccelerationStructureBuildRead => AccessInfo { 808 | stage_mask: vk::PipelineStageFlags::ACCELERATION_STRUCTURE_BUILD_KHR, 809 | access_mask: vk::AccessFlags::ACCELERATION_STRUCTURE_READ_KHR, 810 | image_layout: vk::ImageLayout::UNDEFINED, 811 | }, 812 | AccessType::AccelerationStructureBufferWrite => AccessInfo { 813 | stage_mask: vk::PipelineStageFlags::ACCELERATION_STRUCTURE_BUILD_KHR, 814 | access_mask: vk::AccessFlags::TRANSFER_WRITE, 815 | image_layout: vk::ImageLayout::UNDEFINED, 816 | }, 817 | } 818 | } 819 | 820 | pub(crate) fn is_write_access(access_type: AccessType) -> bool { 821 | match access_type { 822 | AccessType::CommandBufferWriteNVX => true, 823 | AccessType::VertexShaderWrite => true, 824 | AccessType::TessellationControlShaderWrite => true, 825 | AccessType::TessellationEvaluationShaderWrite => true, 826 | AccessType::GeometryShaderWrite => true, 827 | AccessType::FragmentShaderWrite => true, 828 | AccessType::ColorAttachmentWrite => true, 829 | AccessType::DepthStencilAttachmentWrite => true, 830 | AccessType::DepthAttachmentWriteStencilReadOnly => true, 831 | AccessType::StencilAttachmentWriteDepthReadOnly => true, 832 | AccessType::ComputeShaderWrite => true, 833 | AccessType::AnyShaderWrite => true, 834 | 
AccessType::TransferWrite => true, 835 | AccessType::HostWrite => true, 836 | AccessType::ColorAttachmentReadWrite => true, 837 | AccessType::General => true, 838 | _ => false, 839 | } 840 | } 841 | -------------------------------------------------------------------------------- /tests/global.rs: -------------------------------------------------------------------------------- 1 | //! Tests are based on the common synchronization examples on the Vulkan-Docs wiki: https://github.com/KhronosGroup/Vulkan-Docs/wiki/Synchronization-Examples. 2 | 3 | use ash::vk; 4 | 5 | #[test] 6 | fn compute_write_storage_compute_read_storage() { 7 | // Compute write to storage buffer/image, Compute read from storage buffer/image 8 | let global_barrier = vk_sync::GlobalBarrier { 9 | previous_accesses: &[vk_sync::AccessType::ComputeShaderWrite], 10 | next_accesses: &[vk_sync::AccessType::ComputeShaderReadOther], 11 | }; 12 | 13 | let (src_mask, dst_mask, barrier) = vk_sync::get_memory_barrier(&global_barrier); 14 | 15 | assert_eq!(src_mask, vk::PipelineStageFlags::COMPUTE_SHADER); 16 | assert_eq!(dst_mask, vk::PipelineStageFlags::COMPUTE_SHADER); 17 | assert_eq!(barrier.src_access_mask, vk::AccessFlags::SHADER_WRITE); 18 | assert_eq!(barrier.dst_access_mask, vk::AccessFlags::SHADER_READ); 19 | } 20 | 21 | #[test] 22 | fn compute_read_storage_compute_write_storage() { 23 | // Compute read from storage buffer, Compute write to storage buffer (a WAR hazard needs only an execution dependency, so no access masks are required) 24 | let global_barrier = vk_sync::GlobalBarrier { 25 | previous_accesses: &[vk_sync::AccessType::ComputeShaderReadOther], 26 | next_accesses: &[vk_sync::AccessType::ComputeShaderWrite], 27 | }; 28 | 29 | let (src_mask, dst_mask, barrier) = vk_sync::get_memory_barrier(&global_barrier); 30 | 31 | assert_eq!(src_mask, vk::PipelineStageFlags::COMPUTE_SHADER); 32 | assert_eq!(dst_mask, vk::PipelineStageFlags::COMPUTE_SHADER); 33 | assert_eq!(barrier.src_access_mask, vk::AccessFlags::empty()); 34 | assert_eq!(barrier.dst_access_mask, vk::AccessFlags::empty()); 35 | } 36 | 37 | #[test] 38 | fn compute_write_storage_graphics_read_index() { 39 | // Compute write to storage buffer, Graphics read as index buffer 40 | let global_barrier = vk_sync::GlobalBarrier { 41 | previous_accesses: &[vk_sync::AccessType::ComputeShaderWrite], 42 | next_accesses: &[vk_sync::AccessType::IndexBuffer], 43 | }; 44 | 45 | let (src_mask, dst_mask, barrier) = vk_sync::get_memory_barrier(&global_barrier); 46 | 47 | assert_eq!(src_mask, vk::PipelineStageFlags::COMPUTE_SHADER); 48 | assert_eq!(dst_mask, vk::PipelineStageFlags::VERTEX_INPUT); 49 | assert_eq!(barrier.src_access_mask, vk::AccessFlags::SHADER_WRITE); 50 | assert_eq!(barrier.dst_access_mask, vk::AccessFlags::INDEX_READ); 51 | } 52 | 53 | #[test] 54 | fn compute_write_storage_graphics_read_indirect() { 55 | // Compute write to storage buffer, Graphics read as indirect buffer 56 | let global_barrier = vk_sync::GlobalBarrier { 57 | previous_accesses: &[vk_sync::AccessType::ComputeShaderWrite], 58 | next_accesses: &[vk_sync::AccessType::IndirectBuffer], 59 | }; 60 | 61 | let (src_mask, dst_mask, barrier) = vk_sync::get_memory_barrier(&global_barrier); 62 | 63 | assert_eq!(src_mask, vk::PipelineStageFlags::COMPUTE_SHADER); 64 | assert_eq!(dst_mask, vk::PipelineStageFlags::DRAW_INDIRECT); 65 | assert_eq!(barrier.src_access_mask, vk::AccessFlags::SHADER_WRITE); 66 | assert_eq!( 67 | barrier.dst_access_mask, 68 | vk::AccessFlags::INDIRECT_COMMAND_READ 69 | ); 70 | } 71 | 72 | #[test] 73 | fn nothing_transfer_read() { 74 | // None, Transfer read 
from buffer 75 | let global_barrier = vk_sync::GlobalBarrier { 76 | previous_accesses: &[vk_sync::AccessType::Nothing], 77 | next_accesses: &[vk_sync::AccessType::TransferRead], 78 | }; 79 | 80 | let (src_mask, dst_mask, barrier) = vk_sync::get_memory_barrier(&global_barrier); 81 | 82 | assert_eq!(src_mask, vk::PipelineStageFlags::TOP_OF_PIPE); 83 | assert_eq!(dst_mask, vk::PipelineStageFlags::TRANSFER); 84 | assert_eq!(barrier.src_access_mask, vk::AccessFlags::empty()); 85 | assert_eq!(barrier.dst_access_mask, vk::AccessFlags::empty()); 86 | } 87 | 88 | #[test] 89 | fn transfer_write_graphics_read_vertex() { 90 | // Transfer write to buffer, Graphics read from vertex buffer 91 | let global_barrier = vk_sync::GlobalBarrier { 92 | previous_accesses: &[vk_sync::AccessType::TransferWrite], 93 | next_accesses: &[vk_sync::AccessType::VertexBuffer], 94 | }; 95 | 96 | let (src_mask, dst_mask, barrier) = vk_sync::get_memory_barrier(&global_barrier); 97 | 98 | assert_eq!(src_mask, vk::PipelineStageFlags::TRANSFER); 99 | assert_eq!(dst_mask, vk::PipelineStageFlags::VERTEX_INPUT); 100 | assert_eq!(barrier.src_access_mask, vk::AccessFlags::TRANSFER_WRITE); 101 | assert_eq!( 102 | barrier.dst_access_mask, 103 | vk::AccessFlags::VERTEX_ATTRIBUTE_READ 104 | ); 105 | } 106 | 107 | #[test] 108 | fn full_pipeline_barrier() { 109 | // Full pipeline barrier 110 | let global_barrier = vk_sync::GlobalBarrier { 111 | previous_accesses: &[vk_sync::AccessType::General], 112 | next_accesses: &[vk_sync::AccessType::General], 113 | }; 114 | 115 | let (src_mask, dst_mask, barrier) = vk_sync::get_memory_barrier(&global_barrier); 116 | 117 | assert_eq!(src_mask, vk::PipelineStageFlags::ALL_COMMANDS); 118 | assert_eq!(dst_mask, vk::PipelineStageFlags::ALL_COMMANDS); 119 | assert_eq!( 120 | barrier.src_access_mask, 121 | vk::AccessFlags::MEMORY_READ | vk::AccessFlags::MEMORY_WRITE 122 | ); 123 | assert_eq!( 124 | barrier.dst_access_mask, 125 | vk::AccessFlags::MEMORY_READ | vk::AccessFlags::MEMORY_WRITE 126 | ); 127 | } 128 | 129 | #[test] 130 | fn compute_write_storage_graphics_read_index_compute_read_uniform() { 131 | // Compute write to storage buffer, Graphics read as index buffer & Compute read as uniform buffer 132 | let global_barrier = vk_sync::GlobalBarrier { 133 | previous_accesses: &[vk_sync::AccessType::ComputeShaderWrite], 134 | next_accesses: &[ 135 | vk_sync::AccessType::IndexBuffer, 136 | vk_sync::AccessType::ComputeShaderReadUniformBuffer, 137 | ], 138 | }; 139 | 140 | let (src_mask, dst_mask, barrier) = vk_sync::get_memory_barrier(&global_barrier); 141 | 142 | assert_eq!(src_mask, vk::PipelineStageFlags::COMPUTE_SHADER); 143 | assert_eq!( 144 | dst_mask, 145 | vk::PipelineStageFlags::VERTEX_INPUT | vk::PipelineStageFlags::COMPUTE_SHADER 146 | ); 147 | assert_eq!(barrier.src_access_mask, vk::AccessFlags::SHADER_WRITE); 148 | assert_eq!( 149 | barrier.dst_access_mask, 150 | vk::AccessFlags::INDEX_READ | vk::AccessFlags::UNIFORM_READ 151 | ); 152 | } 153 | 154 | #[test] 155 | fn compute_write_texel_graphics_read_indirect_fragment_read_uniform() { 156 | // Compute write to storage texel buffer, Graphics read as indirect buffer & fragment read as uniform buffer 157 | let global_barrier = vk_sync::GlobalBarrier { 158 | previous_accesses: &[vk_sync::AccessType::ComputeShaderWrite], 159 | next_accesses: &[ 160 | vk_sync::AccessType::IndirectBuffer, 161 | vk_sync::AccessType::FragmentShaderReadUniformBuffer, 162 | ], 163 | }; 164 | 165 | let (src_mask, dst_mask, barrier) = 
vk_sync::get_memory_barrier(&global_barrier); 166 | 167 | assert_eq!(src_mask, vk::PipelineStageFlags::COMPUTE_SHADER); 168 | assert_eq!( 169 | dst_mask, 170 | vk::PipelineStageFlags::DRAW_INDIRECT | vk::PipelineStageFlags::FRAGMENT_SHADER 171 | ); 172 | assert_eq!(barrier.src_access_mask, vk::AccessFlags::SHADER_WRITE); 173 | assert_eq!( 174 | barrier.dst_access_mask, 175 | vk::AccessFlags::INDIRECT_COMMAND_READ | vk::AccessFlags::UNIFORM_READ 176 | ); 177 | } 178 | -------------------------------------------------------------------------------- /tests/image.rs: -------------------------------------------------------------------------------- 1 | //! Tests are based on the common synchronization examples on the Vulkan-Docs wiki: https://github.com/KhronosGroup/Vulkan-Docs/wiki/Synchronization-Examples. 2 | 3 | use ash::vk; 4 | 5 | #[test] 6 | fn compute_write_storage_fragment_read_sampled() { 7 | // Compute write to storage image, Graphics fragment read as sampled image 8 | let image_barrier = vk_sync::ImageBarrier { 9 | previous_accesses: &[vk_sync::AccessType::ComputeShaderWrite], 10 | next_accesses: &[vk_sync::AccessType::FragmentShaderReadSampledImageOrUniformTexelBuffer], 11 | previous_layout: vk_sync::ImageLayout::Optimal, 12 | next_layout: vk_sync::ImageLayout::Optimal, 13 | discard_contents: false, 14 | src_queue_family_index: 0, 15 | dst_queue_family_index: 0, 16 | image: vk::Image::null(), 17 | range: vk::ImageSubresourceRange { 18 | aspect_mask: vk::ImageAspectFlags::empty(), 19 | base_mip_level: 0, 20 | level_count: 1, 21 | base_array_layer: 0, 22 | layer_count: 1, 23 | }, 24 | }; 25 | 26 | let (src_mask, dst_mask, barrier) = vk_sync::get_image_memory_barrier(&image_barrier); 27 | 28 | assert_eq!(src_mask, vk::PipelineStageFlags::COMPUTE_SHADER); 29 | assert_eq!(dst_mask, vk::PipelineStageFlags::FRAGMENT_SHADER); 30 | assert_eq!(barrier.src_access_mask, vk::AccessFlags::SHADER_WRITE); 31 | assert_eq!(barrier.dst_access_mask, vk::AccessFlags::SHADER_READ); 32 | assert_eq!(barrier.old_layout, vk::ImageLayout::GENERAL); 33 | assert_eq!( 34 | barrier.new_layout, 35 | vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL 36 | ); 37 | } 38 | 39 | #[test] 40 | fn graphics_write_color_compute_read_sampled() { 41 | // Graphics write to color attachment, Compute read from sampled image 42 | let image_barrier = vk_sync::ImageBarrier { 43 | previous_accesses: &[vk_sync::AccessType::ColorAttachmentWrite], 44 | next_accesses: &[vk_sync::AccessType::ComputeShaderReadSampledImageOrUniformTexelBuffer], 45 | previous_layout: vk_sync::ImageLayout::Optimal, 46 | next_layout: vk_sync::ImageLayout::Optimal, 47 | discard_contents: false, 48 | src_queue_family_index: 0, 49 | dst_queue_family_index: 0, 50 | image: vk::Image::null(), 51 | range: vk::ImageSubresourceRange { 52 | aspect_mask: vk::ImageAspectFlags::empty(), 53 | base_mip_level: 0, 54 | level_count: 1, 55 | base_array_layer: 0, 56 | layer_count: 1, 57 | }, 58 | }; 59 | 60 | let (src_mask, dst_mask, barrier) = vk_sync::get_image_memory_barrier(&image_barrier); 61 | 62 | assert_eq!(src_mask, vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT); 63 | assert_eq!(dst_mask, vk::PipelineStageFlags::COMPUTE_SHADER); 64 | assert_eq!( 65 | barrier.src_access_mask, 66 | vk::AccessFlags::COLOR_ATTACHMENT_WRITE 67 | ); 68 | assert_eq!(barrier.dst_access_mask, vk::AccessFlags::SHADER_READ); 69 | assert_eq!( 70 | barrier.old_layout, 71 | vk::ImageLayout::COLOR_ATTACHMENT_OPTIMAL 72 | ); 73 | assert_eq!( 74 | barrier.new_layout, 75 | 
vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL 76 | ); 77 | } 78 | 79 | #[test] 80 | fn graphics_write_depth_compute_read_sampled() { 81 | // Graphics write to depth attachment, Compute read from sampled image 82 | let image_barrier = vk_sync::ImageBarrier { 83 | previous_accesses: &[vk_sync::AccessType::DepthStencilAttachmentWrite], 84 | next_accesses: &[vk_sync::AccessType::ComputeShaderReadSampledImageOrUniformTexelBuffer], 85 | previous_layout: vk_sync::ImageLayout::Optimal, 86 | next_layout: vk_sync::ImageLayout::Optimal, 87 | discard_contents: false, 88 | src_queue_family_index: 0, 89 | dst_queue_family_index: 0, 90 | image: vk::Image::null(), 91 | range: vk::ImageSubresourceRange { 92 | aspect_mask: vk::ImageAspectFlags::empty(), 93 | base_mip_level: 0, 94 | level_count: 1, 95 | base_array_layer: 0, 96 | layer_count: 1, 97 | }, 98 | }; 99 | 100 | let (src_mask, dst_mask, barrier) = vk_sync::get_image_memory_barrier(&image_barrier); 101 | 102 | assert_eq!( 103 | src_mask, 104 | vk::PipelineStageFlags::EARLY_FRAGMENT_TESTS | vk::PipelineStageFlags::LATE_FRAGMENT_TESTS 105 | ); 106 | assert_eq!(dst_mask, vk::PipelineStageFlags::COMPUTE_SHADER); 107 | assert_eq!( 108 | barrier.src_access_mask, 109 | vk::AccessFlags::DEPTH_STENCIL_ATTACHMENT_WRITE 110 | ); 111 | assert_eq!(barrier.dst_access_mask, vk::AccessFlags::SHADER_READ); 112 | assert_eq!( 113 | barrier.old_layout, 114 | vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL 115 | ); 116 | assert_eq!( 117 | barrier.new_layout, 118 | vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL 119 | ); 120 | } 121 | 122 | #[test] 123 | fn graphics_write_depth_fragment_read_attachment() { 124 | // Graphics write to depth attachment, Graphics fragment read from input attachment 125 | let image_barrier = vk_sync::ImageBarrier { 126 | previous_accesses: &[vk_sync::AccessType::DepthStencilAttachmentWrite], 127 | next_accesses: &[vk_sync::AccessType::FragmentShaderReadDepthStencilInputAttachment], 128 | previous_layout: vk_sync::ImageLayout::Optimal, 129 | next_layout: vk_sync::ImageLayout::Optimal, 130 | discard_contents: false, 131 | src_queue_family_index: 0, 132 | dst_queue_family_index: 0, 133 | image: vk::Image::null(), 134 | range: vk::ImageSubresourceRange { 135 | aspect_mask: vk::ImageAspectFlags::empty(), 136 | base_mip_level: 0, 137 | level_count: 1, 138 | base_array_layer: 0, 139 | layer_count: 1, 140 | }, 141 | }; 142 | 143 | let (src_mask, dst_mask, barrier) = vk_sync::get_image_memory_barrier(&image_barrier); 144 | 145 | assert_eq!( 146 | src_mask, 147 | vk::PipelineStageFlags::EARLY_FRAGMENT_TESTS | vk::PipelineStageFlags::LATE_FRAGMENT_TESTS 148 | ); 149 | assert_eq!(dst_mask, vk::PipelineStageFlags::FRAGMENT_SHADER); 150 | assert_eq!( 151 | barrier.src_access_mask, 152 | vk::AccessFlags::DEPTH_STENCIL_ATTACHMENT_WRITE 153 | ); 154 | assert_eq!( 155 | barrier.dst_access_mask, 156 | vk::AccessFlags::INPUT_ATTACHMENT_READ 157 | ); 158 | assert_eq!( 159 | barrier.old_layout, 160 | vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL 161 | ); 162 | assert_eq!( 163 | barrier.new_layout, 164 | vk::ImageLayout::DEPTH_STENCIL_READ_ONLY_OPTIMAL 165 | ); 166 | } 167 | 168 | #[test] 169 | fn graphics_write_depth_fragment_read_sampled() { 170 | // Graphics write to depth attachment, Graphics fragment read from sampled image 171 | let image_barrier = vk_sync::ImageBarrier { 172 | previous_accesses: &[vk_sync::AccessType::DepthStencilAttachmentWrite], 173 | next_accesses: &[vk_sync::AccessType::FragmentShaderReadSampledImageOrUniformTexelBuffer], 174 | 
previous_layout: vk_sync::ImageLayout::Optimal, 175 | next_layout: vk_sync::ImageLayout::Optimal, 176 | discard_contents: false, 177 | src_queue_family_index: 0, 178 | dst_queue_family_index: 0, 179 | image: vk::Image::null(), 180 | range: vk::ImageSubresourceRange { 181 | aspect_mask: vk::ImageAspectFlags::empty(), 182 | base_mip_level: 0, 183 | level_count: 1, 184 | base_array_layer: 0, 185 | layer_count: 1, 186 | }, 187 | }; 188 | 189 | let (src_mask, dst_mask, barrier) = vk_sync::get_image_memory_barrier(&image_barrier); 190 | 191 | assert_eq!( 192 | src_mask, 193 | vk::PipelineStageFlags::EARLY_FRAGMENT_TESTS | vk::PipelineStageFlags::LATE_FRAGMENT_TESTS 194 | ); 195 | assert_eq!(dst_mask, vk::PipelineStageFlags::FRAGMENT_SHADER); 196 | assert_eq!( 197 | barrier.src_access_mask, 198 | vk::AccessFlags::DEPTH_STENCIL_ATTACHMENT_WRITE 199 | ); 200 | assert_eq!(barrier.dst_access_mask, vk::AccessFlags::SHADER_READ); 201 | assert_eq!( 202 | barrier.old_layout, 203 | vk::ImageLayout::DEPTH_STENCIL_ATTACHMENT_OPTIMAL 204 | ); 205 | assert_eq!( 206 | barrier.new_layout, 207 | vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL 208 | ); 209 | } 210 | 211 | #[test] 212 | fn graphics_write_color_fragment_read_attachment() { 213 | // Graphics write to color attachment, Graphics fragment read from input attachment 214 | let image_barrier = vk_sync::ImageBarrier { 215 | previous_accesses: &[vk_sync::AccessType::ColorAttachmentWrite], 216 | next_accesses: &[vk_sync::AccessType::FragmentShaderReadColorInputAttachment], 217 | previous_layout: vk_sync::ImageLayout::Optimal, 218 | next_layout: vk_sync::ImageLayout::Optimal, 219 | discard_contents: false, 220 | src_queue_family_index: 0, 221 | dst_queue_family_index: 0, 222 | image: vk::Image::null(), 223 | range: vk::ImageSubresourceRange { 224 | aspect_mask: vk::ImageAspectFlags::empty(), 225 | base_mip_level: 0, 226 | level_count: 1, 227 | base_array_layer: 0, 228 | layer_count: 1, 229 | }, 230 | }; 231 | 232 | let (src_mask, dst_mask, barrier) = vk_sync::get_image_memory_barrier(&image_barrier); 233 | 234 | assert_eq!(src_mask, vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT); 235 | assert_eq!(dst_mask, vk::PipelineStageFlags::FRAGMENT_SHADER); 236 | assert_eq!( 237 | barrier.src_access_mask, 238 | vk::AccessFlags::COLOR_ATTACHMENT_WRITE 239 | ); 240 | assert_eq!( 241 | barrier.dst_access_mask, 242 | vk::AccessFlags::INPUT_ATTACHMENT_READ 243 | ); 244 | assert_eq!( 245 | barrier.old_layout, 246 | vk::ImageLayout::COLOR_ATTACHMENT_OPTIMAL 247 | ); 248 | assert_eq!( 249 | barrier.new_layout, 250 | vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL 251 | ); 252 | } 253 | 254 | #[test] 255 | fn graphics_write_color_fragment_read_sampled() { 256 | // Graphics write to color attachment, Graphics fragment read from sampled image 257 | let image_barrier = vk_sync::ImageBarrier { 258 | previous_accesses: &[vk_sync::AccessType::ColorAttachmentWrite], 259 | next_accesses: &[vk_sync::AccessType::FragmentShaderReadSampledImageOrUniformTexelBuffer], 260 | previous_layout: vk_sync::ImageLayout::Optimal, 261 | next_layout: vk_sync::ImageLayout::Optimal, 262 | discard_contents: false, 263 | src_queue_family_index: 0, 264 | dst_queue_family_index: 0, 265 | image: vk::Image::null(), 266 | range: vk::ImageSubresourceRange { 267 | aspect_mask: vk::ImageAspectFlags::empty(), 268 | base_mip_level: 0, 269 | level_count: 1, 270 | base_array_layer: 0, 271 | layer_count: 1, 272 | }, 273 | }; 274 | 275 | let (src_mask, dst_mask, barrier) = vk_sync::get_image_memory_barrier(&image_barrier); 276 
| 277 | assert_eq!(src_mask, vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT); 278 | assert_eq!(dst_mask, vk::PipelineStageFlags::FRAGMENT_SHADER); 279 | assert_eq!( 280 | barrier.src_access_mask, 281 | vk::AccessFlags::COLOR_ATTACHMENT_WRITE 282 | ); 283 | assert_eq!(barrier.dst_access_mask, vk::AccessFlags::SHADER_READ); 284 | assert_eq!( 285 | barrier.old_layout, 286 | vk::ImageLayout::COLOR_ATTACHMENT_OPTIMAL 287 | ); 288 | assert_eq!( 289 | barrier.new_layout, 290 | vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL 291 | ); 292 | } 293 | 294 | #[test] 295 | fn graphics_write_color_vertex_read_sampled() { 296 | // Graphics write to color attachment, Graphics vertex read from sampled image 297 | let image_barrier = vk_sync::ImageBarrier { 298 | previous_accesses: &[vk_sync::AccessType::ColorAttachmentWrite], 299 | next_accesses: &[vk_sync::AccessType::VertexShaderReadSampledImageOrUniformTexelBuffer], 300 | previous_layout: vk_sync::ImageLayout::Optimal, 301 | next_layout: vk_sync::ImageLayout::Optimal, 302 | discard_contents: false, 303 | src_queue_family_index: 0, 304 | dst_queue_family_index: 0, 305 | image: vk::Image::null(), 306 | range: vk::ImageSubresourceRange { 307 | aspect_mask: vk::ImageAspectFlags::empty(), 308 | base_mip_level: 0, 309 | level_count: 1, 310 | base_array_layer: 0, 311 | layer_count: 1, 312 | }, 313 | }; 314 | 315 | let (src_mask, dst_mask, barrier) = vk_sync::get_image_memory_barrier(&image_barrier); 316 | 317 | assert_eq!(src_mask, vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT); 318 | assert_eq!(dst_mask, vk::PipelineStageFlags::VERTEX_SHADER); 319 | assert_eq!( 320 | barrier.src_access_mask, 321 | vk::AccessFlags::COLOR_ATTACHMENT_WRITE 322 | ); 323 | assert_eq!(barrier.dst_access_mask, vk::AccessFlags::SHADER_READ); 324 | assert_eq!( 325 | barrier.old_layout, 326 | vk::ImageLayout::COLOR_ATTACHMENT_OPTIMAL 327 | ); 328 | assert_eq!( 329 | barrier.new_layout, 330 | vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL 331 | ); 332 | } 333 | 334 | #[test] 335 | fn graphics_read_sampled_graphics_write_color() { 336 | // Graphics fragment read from sampled image, Graphics write to color attachment 337 | let image_barrier = vk_sync::ImageBarrier { 338 | previous_accesses: &[ 339 | vk_sync::AccessType::FragmentShaderReadSampledImageOrUniformTexelBuffer, 340 | ], 341 | next_accesses: &[vk_sync::AccessType::ColorAttachmentWrite], 342 | previous_layout: vk_sync::ImageLayout::Optimal, 343 | next_layout: vk_sync::ImageLayout::Optimal, 344 | discard_contents: false, 345 | src_queue_family_index: 0, 346 | dst_queue_family_index: 0, 347 | image: vk::Image::null(), 348 | range: vk::ImageSubresourceRange { 349 | aspect_mask: vk::ImageAspectFlags::empty(), 350 | base_mip_level: 0, 351 | level_count: 1, 352 | base_array_layer: 0, 353 | layer_count: 1, 354 | }, 355 | }; 356 | 357 | let (src_mask, dst_mask, barrier) = vk_sync::get_image_memory_barrier(&image_barrier); 358 | 359 | assert_eq!(src_mask, vk::PipelineStageFlags::FRAGMENT_SHADER); 360 | assert_eq!(dst_mask, vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT); 361 | assert_eq!(barrier.src_access_mask, vk::AccessFlags::empty()); 362 | assert_eq!(barrier.dst_access_mask, vk::AccessFlags::empty()); 363 | assert_eq!( 364 | barrier.old_layout, 365 | vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL 366 | ); 367 | assert_eq!( 368 | barrier.new_layout, 369 | vk::ImageLayout::COLOR_ATTACHMENT_OPTIMAL 370 | ); 371 | } 372 | 373 | #[test] 374 | fn transfer_write_image_fragment_read_sampled() { 375 | // Transfer write to image, Graphics fragment read from 
sampled image 376 | let image_barrier = vk_sync::ImageBarrier { 377 | previous_accesses: &[vk_sync::AccessType::TransferWrite], 378 | next_accesses: &[vk_sync::AccessType::FragmentShaderReadSampledImageOrUniformTexelBuffer], 379 | previous_layout: vk_sync::ImageLayout::Optimal, 380 | next_layout: vk_sync::ImageLayout::Optimal, 381 | discard_contents: false, 382 | src_queue_family_index: 0, 383 | dst_queue_family_index: 0, 384 | image: vk::Image::null(), 385 | range: vk::ImageSubresourceRange { 386 | aspect_mask: vk::ImageAspectFlags::empty(), 387 | base_mip_level: 0, 388 | level_count: 1, 389 | base_array_layer: 0, 390 | layer_count: 1, 391 | }, 392 | }; 393 | 394 | let (src_mask, dst_mask, barrier) = vk_sync::get_image_memory_barrier(&image_barrier); 395 | 396 | assert_eq!(src_mask, vk::PipelineStageFlags::TRANSFER); 397 | assert_eq!(dst_mask, vk::PipelineStageFlags::FRAGMENT_SHADER); 398 | assert_eq!(barrier.src_access_mask, vk::AccessFlags::TRANSFER_WRITE); 399 | assert_eq!(barrier.dst_access_mask, vk::AccessFlags::SHADER_READ); 400 | assert_eq!(barrier.old_layout, vk::ImageLayout::TRANSFER_DST_OPTIMAL); 401 | assert_eq!( 402 | barrier.new_layout, 403 | vk::ImageLayout::SHADER_READ_ONLY_OPTIMAL 404 | ); 405 | } 406 | 407 | #[test] 408 | fn graphics_write_color_presentation() { 409 | // Graphics color attachment write, Presentation 410 | let image_barrier = vk_sync::ImageBarrier { 411 | previous_accesses: &[vk_sync::AccessType::ColorAttachmentWrite], 412 | next_accesses: &[vk_sync::AccessType::Present], 413 | previous_layout: vk_sync::ImageLayout::Optimal, 414 | next_layout: vk_sync::ImageLayout::Optimal, 415 | discard_contents: false, 416 | src_queue_family_index: 0, 417 | dst_queue_family_index: 0, 418 | image: vk::Image::null(), 419 | range: vk::ImageSubresourceRange { 420 | aspect_mask: vk::ImageAspectFlags::empty(), 421 | base_mip_level: 0, 422 | level_count: 1, 423 | base_array_layer: 0, 424 | layer_count: 1, 425 | }, 426 | }; 427 | 428 | let (src_mask, dst_mask, barrier) = vk_sync::get_image_memory_barrier(&image_barrier); 429 | 430 | assert_eq!(src_mask, vk::PipelineStageFlags::COLOR_ATTACHMENT_OUTPUT); 431 | assert_eq!(dst_mask, vk::PipelineStageFlags::BOTTOM_OF_PIPE); 432 | assert_eq!( 433 | barrier.src_access_mask, 434 | vk::AccessFlags::COLOR_ATTACHMENT_WRITE 435 | ); 436 | assert_eq!(barrier.dst_access_mask, vk::AccessFlags::empty()); 437 | assert_eq!( 438 | barrier.old_layout, 439 | vk::ImageLayout::COLOR_ATTACHMENT_OPTIMAL 440 | ); 441 | assert_eq!(barrier.new_layout, vk::ImageLayout::PRESENT_SRC_KHR); 442 | } 443 | --------------------------------------------------------------------------------
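Usage note (editor's sketch, not a file from the repository above): the tests only check the stage masks, access masks, and layouts that `vk_sync::get_memory_barrier` computes; actually recording the barrier is left to the caller (see src/cmd.rs for the crate's own command-buffer helpers). The snippet below is a rough sketch of how that result might be fed to plain ash, assuming an ash version like the 0.33 dependency above where device commands are inherent methods; the helper name is invented for illustration.

```rust
use ash::vk;

/// Hypothetical helper (illustration only, not part of vk-sync): records the
/// compute-write -> compute-read barrier from tests/global.rs into a command
/// buffer that is already in the recording state.
///
/// # Safety
/// `cmd` must be recording and must have been allocated from `device`.
unsafe fn insert_compute_rw_barrier(device: &ash::Device, cmd: vk::CommandBuffer) {
    let global_barrier = vk_sync::GlobalBarrier {
        previous_accesses: &[vk_sync::AccessType::ComputeShaderWrite],
        next_accesses: &[vk_sync::AccessType::ComputeShaderReadOther],
    };

    // Translate the high-level access types into stage masks and a vk::MemoryBarrier.
    let (src_stage, dst_stage, memory_barrier) = vk_sync::get_memory_barrier(&global_barrier);

    // Record the barrier with plain ash; no buffer or image barriers are needed here.
    device.cmd_pipeline_barrier(
        cmd,
        src_stage,
        dst_stage,
        vk::DependencyFlags::empty(),
        &[memory_barrier],
        &[],
        &[],
    );
}
```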