gfx-rs / wgpu

Cross-platform, safe, pure-rust graphics api.

Home Page: https://wgpu.rs

License: Apache License 2.0

Languages: Rust 76.20%, WGSL 18.67%, JavaScript 2.21%, GLSL 1.48%, HLSL 1.41%, Nix 0.02%, HTML 0.01%
Topics: webgpu, rust, gpu, metal, opengl, vulkan, hacktoberfest, d3d12

wgpu's Introduction

Getting Started | Documentation | Blog | Funding

gfx-rs

gfx-rs is a low-level, cross-platform graphics and compute abstraction library in Rust. It consists of the following components:

gfx-hal deprecation

As of the v0.9 release, gfx-hal is in maintenance mode. gfx-hal development was mainly driven by wgpu, which has since switched to its own GPU abstraction layer, wgpu-hal. gfx-hal will therefore receive maintenance only, until the developers figure out the story for gfx-portability. Read more about the transition in #3768.

hal

  • gfx-hal, gfx's hardware abstraction layer: a Vulkan-ic, mostly unsafe API which translates to native graphics backends.
  • gfx-backend-*, the graphics backends for various platforms.
  • gfx-warden, a data-driven reference test framework, used to verify consistency across all graphics backends.

gfx-rs is hard to use; it is recommended for performance-sensitive libraries and engines. If that's not your domain, take a look at wgpu-rs for a safe and simple alternative.

Hardware Abstraction Layer

The Hardware Abstraction Layer (HAL) is a thin, low-level graphics and compute layer that translates API calls to the various backends, which allows for cross-platform support. The API of this layer is based on the Vulkan API, adapted to be more Rust-friendly.

Currently HAL has backends for Vulkan, DirectX 12/11, Metal, and OpenGL/OpenGL ES/WebGL.

The HAL layer is consumed directly by user applications or libraries. HAL is also used in efforts such as gfx-portability.

See the Big Picture blog post for connections.

The old gfx crate (pre-ll)

This repository was originally home to the gfx crate, which is now deprecated. You can find the latest versions of the code for that crate in the pre-ll branch of this repository.

The master branch of this repository is now focused on developing gfx-hal and its associated backend and helper libraries, as described above. gfx-hal is a complete rewrite of gfx, but it is not necessarily the direct successor to gfx. Instead, it serves a different purpose than the original gfx crate, by being "lower level" than the original. Hence, the name of gfx-hal was originally ll, which stands for "lower level", and the original gfx is now referred to as pre-ll.

The spiritual successor to the original gfx is actually wgpu, which sits at a similar level of abstraction to the old gfx crate, but with a modernized API that is a better fit for implementation on top of Vulkan/DX12/Metal. If you want something similar to the old gfx crate that is being actively developed, wgpu is probably what you're looking for, rather than gfx-hal.

Contributing

We are actively looking for new contributors and aim to be welcoming and helpful to anyone that is interested! We know the code base can be a bit intimidating in size and depth at first, and to this end we have a label on the issue tracker which marks issues that are new contributor friendly and have some basic direction for completion in the issue comments. If you have any questions about any of these issues (or any other issues) you may want to work on, please comment on GitHub and/or drop a message in our Matrix chat!

License

This repository is licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

wgpu's People

Contributors

bors[bot], crowlkats, cwfitzgerald, daxpedda, dependabot[bot], electronicru, erichdongubler, expenses, fornwall, frizi, gabrielmajeri, gordon-f, grovesnl, i509vcb, imberflur, jcapucho, jimblandy, jinleili, kvark, lachlansneff, msiglreith, napokue, nical, pjoe, rukai, scoopr, teoxoy, wumpf, xiaopengli89, zicklag


wgpu's Issues

Path to avoid deadlocks

Our HUB has a bunch of storages that we lock for read or write where needed. Once the entry points start being called from different threads, we can easily deadlock between two entry points. I think we can solve this by enforcing an order on the locks: every entry point has to lock the hub resources strictly in the order their storages are declared (in hub.rs).

Explanation

Suppose we make a node of a graph for every storage, and draw an arrow from A to B if an entry point locks storage A followed by B. A deadlock occurs only if these arrows form a cycle, e.g. entry X locks A -> B -> C while entry Y locks B -> C -> A. Of course, a cycle can also be formed by more than two entry points.

Now, assume the code follows a strict locking order as suggested above, and imagine we hit a deadlock anyway. Take the last storage type currently locked (pick the subset of all storages that are locked and take the last according to our order), and call it A. Since we are in a deadlock, A is locked by some entry X while other entries try (and fail) to lock it. However, nothing prevents X from proceeding: whatever else it needs must come after A in the order, and nothing after A is currently locked. So X can proceed and eventually release A, contradicting the assumption that this is a deadlock.

Plan

  1. Audit the code and re-order the locks.
  2. Make sure that non-HUB resources are also locked in the order (whatever it is).
  3. Document the rules thoroughly in the code.
  4. Implement run-time debug checks for the order, asserting accordingly.
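Step 4 of the plan can be sketched as a run-time debug check. This is a minimal illustration only: `LockOrderGuard` and the integer ranks are hypothetical, standing in for the declaration order of the storages in hub.rs.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical debug helper: each storage gets a rank matching its
// declaration order in hub.rs (rank 0 means "nothing locked yet").
// Taking a lock is only legal if its rank is strictly greater than
// the rank of the last lock taken.
struct LockOrderGuard {
    last_rank: AtomicUsize,
}

impl LockOrderGuard {
    fn new() -> Self {
        LockOrderGuard { last_rank: AtomicUsize::new(0) }
    }

    /// Records the new rank and returns whether taking this lock
    /// respects the declaration order.
    fn check_and_advance(&self, rank: usize) -> bool {
        let prev = self.last_rank.swap(rank, Ordering::SeqCst);
        prev < rank
    }
}
```

In the real code this check would presumably be per-thread (e.g. a thread-local) and would assert in debug builds rather than return a bool.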

Resizing & unfocusing the Cube example causes rendering issues on macOS

Resizing and moving the Cube example out of focus causes some artefacts (flickering), after which it no longer renders properly (new instances).
This happens both with the unmodified Cube example and with my Quad example using SDL2.
It is easier to reproduce with SDL2 (no need to un-focus the window), but it happens either way.

No longer crashes though as it did with #58

hello_triangle fails to compile on macOS

Running the following command

cargo run --bin hello_triangle --features metal

Produces this error:

warning: unused manifest key: package.edition
warning: unused manifest key: package.edition
warning: unused manifest key: package.edition
warning: unused manifest key: package.edition
warning: unused manifest key: package.edition
   Compiling winit v0.18.1
error[E0658]: use of unstable library feature 'option_filter' (see issue #45860)
   --> /Users/cshenton/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.18.1/src/platform/macos/util/cursor.rs:127:10
    |
127 |         .filter(|image| image.isValid() == YES)
    |          ^^^^^^

error: aborting due to previous error

For more information about this error, try `rustc --explain E0658`.
error: Could not compile `winit`.

To learn more, run the command again with --verbose.

OS: MacOS High Sierra 10.13.6
Xcode: 10.1
Rust: 1.26.2
Cargo: 1.26.0

Borrowing in Rust API

The current approach attempts to be safe, and for that reason we borrow things. For example, recording a render pass leaves the command buffer in a borrowed state. Unfortunately, some of those are more difficult:

  • do we borrow the device when returning a queue?
  • do we borrow the swap chain when acquiring a frame?

Perhaps there could be two different Rust wrappers: one that borrows and is more idiomatic but less convenient, and one that uses copy semantics more aggressively. We don't have to provide both of those ourselves (but someone else might).
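The borrow-based flavor can be sketched with lifetimes alone; all types here are hypothetical stand-ins, not the real wgpu-rs API:

```rust
// Beginning a render pass mutably borrows the command buffer, so the
// borrow checker statically prevents touching the command buffer again
// until the pass is dropped.
struct CommandBuffer {
    commands: Vec<String>,
}

struct RenderPass<'a> {
    cmd: &'a mut CommandBuffer,
}

impl CommandBuffer {
    fn begin_render_pass(&mut self) -> RenderPass<'_> {
        self.commands.push("begin_pass".to_string());
        RenderPass { cmd: self }
    }
}

impl<'a> RenderPass<'a> {
    fn draw(&mut self) {
        self.cmd.commands.push("draw".to_string());
    }
}
```

While a `RenderPass` is alive, any direct use of the `CommandBuffer` fails to compile, which is exactly the safety the borrowing wrapper buys, at the cost of convenience.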

Stronger typed resource tracker

The current Tracker interface is run-time driven, i.e. you provide a TrackPermit whenever you need to transition anything. In fact, we use tracking for several different purposes:

  1. on the device, for tracking states between command buffers and gluing them correctly. These always use REPLACE, they need only the initialization logic of query(), and they need to support consume() on other trackers (coming from group 2). They don't need the "init" state.
  2. on the command buffer, for tracking states of used resources between commands. These always use REPLACE, and they need to support consume() on other trackers (coming from group 3). They do need to keep both the "init" state and the "last" one.
  3. on render passes and bind groups, for figuring out the exact state (no transitions!) of each used resource. These always use EXTEND and never need to consume anything. They don't need "init" state and can just update the current one.

Perhaps, we could do some clever typing and improve the internal ergonomics by lifting those workflows to the type level?
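One way to lift this to the type level is zero-sized marker types for the transition semantics, so each tracker only exposes the operations its group needs. The names below are illustrative, not the actual internals:

```rust
use std::marker::PhantomData;

// Marker types encoding the transition semantics at compile time.
struct Replace; // groups 1 and 2: overwrite the last known state
struct Extend;  // group 3: merge into the current state, no transitions

struct Tracker<Mode> {
    state: u32, // resource state modeled as bitflags for this sketch
    _mode: PhantomData<Mode>,
}

impl Tracker<Replace> {
    /// Replaces the tracked state, returning the previous one
    /// (which is what a transition needs to know).
    fn transit(&mut self, new: u32) -> u32 {
        std::mem::replace(&mut self.state, new)
    }
}

impl Tracker<Extend> {
    /// Extends the tracked state by merging in the new usage.
    fn transit(&mut self, new: u32) -> u32 {
        self.state |= new;
        self.state
    }
}
```

A `Tracker<Extend>` simply has no `consume()` in this scheme, so misuse becomes a compile error instead of a run-time invariant.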

Crash when moving Window to background in Example

This sample crashes when running with Metal on macOS Mojave.

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: ()', src/libcore/result.rs:1009:5
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:71
   2: std::panicking::default_hook::{{closure}}
             at src/libstd/sys_common/backtrace.rs:59
             at src/libstd/panicking.rs:211
   3: std::panicking::default_hook
             at src/libstd/panicking.rs:227
   4: <std::panicking::begin_panic::PanicPayload<A> as core::panic::BoxMeUp>::get
             at src/libstd/panicking.rs:491
   5: std::panicking::continue_panic_fmt
             at src/libstd/panicking.rs:398
   6: std::panicking::try::do_call
             at src/libstd/panicking.rs:325
   7: core::char::methods::<impl char>::escape_debug
             at src/libcore/panicking.rs:95
   8: <parking_lot::raw_mutex::RawMutex as lock_api::mutex::RawMutex>::unlock
             at /rustc/9fda7c2237db910e41d6a712e9a2139b352e558b/src/libcore/macros.rs:26
   9: <parking_lot::raw_mutex::RawMutex as lock_api::mutex::RawMutex>::unlock
             at /rustc/9fda7c2237db910e41d6a712e9a2139b352e558b/src/libcore/result.rs:808
  10: core::alloc::Layout::repeat
             at wgpu-native/src/swap_chain.rs:159
  11: wgpu::TextureCopyView::into_native
             at wgpu-rs/src/lib.rs:673
  12: core::ptr::real_drop_in_place
             at /rustc/9fda7c2237db910e41d6a712e9a2139b352e558b/src/libcore/ptr.rs:204
  13: cube::framework::run
             at gfx-examples/src/framework.rs:111
  14: cube::main
             at gfx-examples/src/cube.rs:330
  15: std::rt::lang_start::{{closure}}
             at /rustc/9fda7c2237db910e41d6a712e9a2139b352e558b/src/libstd/rt.rs:74
  16: std::panicking::try::do_call
             at src/libstd/rt.rs:59
             at src/libstd/panicking.rs:310
  17: panic_unwind::dwarf::eh::read_encoded_pointer
             at src/libpanic_unwind/lib.rs:102
  18: <std::panicking::begin_panic::PanicPayload<A> as core::panic::BoxMeUp>::get
             at src/libstd/panicking.rs:289
             at src/libstd/panic.rs:398
             at src/libstd/rt.rs:58
  19: std::rt::lang_start
             at /rustc/9fda7c2237db910e41d6a712e9a2139b352e558b/src/libstd/rt.rs:74
  20: <cube::Vertex as core::clone::Clone>::clone

Cargo failing to load git submodule

It looks like cargo doesn't support recursively loading git submodules in dependencies.

It works fine when one checks out the entire crate locally and depends on that directly, but if I point my crate's dependency at this git repository, I get the following:

$ cargo build
    Updating git repository `https://github.com/gfx-rs/wgpu`
error: failed to load source for a dependency on `wgpu`

Caused by:
  Unable to update https://github.com/gfx-rs/wgpu

Caused by:
  failed to update submodule `examples/vendor/glfw`

Caused by:
  no URL configured for submodule 'examples/vendor/glfw'; class=Submodule (17)

[DX11] Memory leak on resize

The examples all leak memory on Windows with the DX12 backend. The DX11 backend appears fine at first, but leaks when you resize the window. Making the window smaller also breaks rendering with DX11.

The vulkan backend does not exhibit this behavior.

vk: shadow example validation errors

Validation errors appear when running the shadow example. The freeing of swapchain images was worked around by removing the image destroy calls.

ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ VUID-vkCmdCopyBuffer-srcBuffer-00118 ] Object: 0x2f (Type = 9) | Invalid usage flag for Buffer 0x2f used by vkCmdCopyBuffer(). In this case, Buffer should have VK_BUFFER_USAGE_TRANSFER_SRC_BIT set during creation. The Vulkan spec states: srcBuffer must have been created with VK_BUFFER_USAGE_TRANSFER_SRC_BIT usage flag (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkCmdCopyBuffer-srcBuffer-00118)
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ VUID-vkCmdCopyBuffer-srcBuffer-00118 ] Object: 0x2f (Type = 9) | Invalid usage flag for Buffer 0x2f used by vkCmdCopyBuffer(). In this case, Buffer should have VK_BUFFER_USAGE_TRANSFER_SRC_BIT set during creation. The Vulkan spec states: srcBuffer must have been created with VK_BUFFER_USAGE_TRANSFER_SRC_BIT usage flag (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkCmdCopyBuffer-srcBuffer-00118)
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ UNASSIGNED-CoreValidation-DrawState-InvalidImageLayout ] Object: 0x181652b2150 (Type = 6) | Cannot submit cmd buffer using image (0x2a) [sub-resource: aspectMask 0x2 array layer 0, mip level 0], with layout VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL when first use is VK_IMAGE_LAYOUT_GENERAL.
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ UNASSIGNED-CoreValidation-DrawState-InvalidImageLayout ] Object: 0x181652ac860 (Type = 6) | Cannot submit cmd buffer using image (0x2a) [sub-resource: aspectMask 0x2 array layer 1, mip level 0], with layout VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL when first use is VK_IMAGE_LAYOUT_GENERAL.
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ UNASSIGNED-CoreValidation-DrawState-InvalidImageLayout ] Object: 0x181652ac860 (Type = 6) | Cannot submit cmd buffer using image (0x2a) [sub-resource: aspectMask 0x2 array layer 8, mip level 0], with layout VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL when first use is VK_IMAGE_LAYOUT_GENERAL.
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ UNASSIGNED-CoreValidation-DrawState-InvalidImageLayout ] Object: 0x181652ac860 (Type = 6) | Cannot submit cmd buffer using image (0x2a) [sub-resource: aspectMask 0x2 array layer 2, mip level 0], with layout VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL when first use is VK_IMAGE_LAYOUT_GENERAL.
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ UNASSIGNED-CoreValidation-DrawState-InvalidImageLayout ] Object: 0x181652ac860 (Type = 6) | Cannot submit cmd buffer using image (0x2a) [sub-resource: aspectMask 0x2 array layer 3, mip level 0], with layout VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL when first use is VK_IMAGE_LAYOUT_GENERAL.
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ UNASSIGNED-CoreValidation-DrawState-InvalidImageLayout ] Object: 0x181652ac860 (Type = 6) | Cannot submit cmd buffer using image (0x2a) [sub-resource: aspectMask 0x2 array layer 4, mip level 0], with layout VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL when first use is VK_IMAGE_LAYOUT_GENERAL.
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ UNASSIGNED-CoreValidation-DrawState-InvalidImageLayout ] Object: 0x181652ac860 (Type = 6) | Cannot submit cmd buffer using image (0x2a) [sub-resource: aspectMask 0x2 array layer 5, mip level 0], with layout VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL when first use is VK_IMAGE_LAYOUT_GENERAL.
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ UNASSIGNED-CoreValidation-DrawState-InvalidImageLayout ] Object: 0x181652ac860 (Type = 6) | Cannot submit cmd buffer using image (0x2a) [sub-resource: aspectMask 0x2 array layer 6, mip level 0], with layout VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL when first use is VK_IMAGE_LAYOUT_GENERAL.
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ UNASSIGNED-CoreValidation-DrawState-InvalidImageLayout ] Object: 0x181652ac860 (Type = 6) | Cannot submit cmd buffer using image (0x2a) [sub-resource: aspectMask 0x2 array layer 7, mip level 0], with layout VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL when first use is VK_IMAGE_LAYOUT_GENERAL.
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ UNASSIGNED-CoreValidation-DrawState-InvalidImageLayout ] Object: 0x181652ac860 (Type = 6) | Cannot submit cmd buffer using image (0x2a) [sub-resource: aspectMask 0x2 array layer 9, mip level 0], with layout VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL when first use is VK_IMAGE_LAYOUT_GENERAL.
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ VUID-vkResetCommandBuffer-commandBuffer-00045 ] Object: 0x18165174790 (Type = 6) | Attempt to reset command buffer (0x18165174790) which is in use. The Vulkan spec states: commandBuffer must not be in the pending state (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkResetCommandBuffer-commandBuffer-00045)
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ VUID-vkResetCommandBuffer-commandBuffer-00045 ] Object: 0x1816517a080 (Type = 6) | Attempt to reset command buffer (0x1816517a080) which is in use. The Vulkan spec states: commandBuffer must not be in the pending state (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkResetCommandBuffer-commandBuffer-00045)
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ VUID-vkResetCommandBuffer-commandBuffer-00045 ] Object: 0x1816517f970 (Type = 6) | Attempt to reset command buffer (0x1816517f970) which is in use. The Vulkan spec states: commandBuffer must not be in the pending state (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkResetCommandBuffer-commandBuffer-00045)
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ VUID-vkResetCommandBuffer-commandBuffer-00045 ] Object: 0x18165185260 (Type = 6) | Attempt to reset command buffer (0x18165185260) which is in use. The Vulkan spec states: commandBuffer must not be in the pending state (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkResetCommandBuffer-commandBuffer-00045)
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ VUID-vkResetCommandBuffer-commandBuffer-00045 ] Object: 0x1816518ab50 (Type = 6) | Attempt to reset command buffer (0x1816518ab50) which is in use. The Vulkan spec states: commandBuffer must not be in the pending state (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkResetCommandBuffer-commandBuffer-00045)
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ VUID-vkResetCommandBuffer-commandBuffer-00045 ] Object: 0x18165190440 (Type = 6) | Attempt to reset command buffer (0x18165190440) which is in use. The Vulkan spec states: commandBuffer must not be in the pending state (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkResetCommandBuffer-commandBuffer-00045)
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ VUID-vkResetCommandBuffer-commandBuffer-00045 ] Object: 0x18165195d30 (Type = 6) | Attempt to reset command buffer (0x18165195d30) which is in use. The Vulkan spec states: commandBuffer must not be in the pending state (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkResetCommandBuffer-commandBuffer-00045)
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ VUID-vkResetCommandBuffer-commandBuffer-00045 ] Object: 0x1816519b620 (Type = 6) | Attempt to reset command buffer (0x1816519b620) which is in use. The Vulkan spec states: commandBuffer must not be in the pending state (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkResetCommandBuffer-commandBuffer-00045)
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ VUID-vkResetCommandBuffer-commandBuffer-00045 ] Object: 0x181651a0f10 (Type = 6) | Attempt to reset command buffer (0x181651a0f10) which is in use. The Vulkan spec states: commandBuffer must not be in the pending state (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkResetCommandBuffer-commandBuffer-00045)
ERROR 2019-02-26T21:04:59Z: gfx_backend_vulkan: [Validation]  [ VUID-vkResetCommandBuffer-commandBuffer-00045 ] Object: 0x181651a6800 (Type = 6) | Attempt to reset command buffer (0x181651a6800) which is in use. The Vulkan spec states: commandBuffer must not be in the pending state (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkResetCommandBuffer-commandBuffer-00045)

Derive Debug for everything.

Maybe related to #72, in which case this can be closed.

I rely a lot on println! and want to be able to see the fields of various objects.
For example, I want to know which adapter is currently in use, and there is no way(?) to do that as of now.
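For illustration, this is all the request amounts to; `AdapterInfo` here is a hypothetical type, not the actual wgpu adapter struct:

```rust
// With #[derive(Debug)] on public types, users can inspect state
// with the {:?} formatter instead of guessing.
#[derive(Debug)]
struct AdapterInfo {
    name: String,
    backend: String,
}

fn describe(info: &AdapterInfo) -> String {
    format!("{:?}", info)
}
```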

cube and shadow have validation errors

I'm seeing validation errors for the cube and shadow gfx-examples:

m4b@efrit ::  [ ~/git/wgpu ] target/debug/cube 
Xlib:  extension "NV-GLX" missing on display ":0".
ERROR 2019-04-25T06:29:06Z: gfx_backend_vulkan: [Validation]  [ VUID-vkMapMemory-size-00681 ] Object: 0x1f (Type = 8) | Mapping Memory from 0x0 to 0x80 oversteps total array size 0x48. The Vulkan spec states: If size is not equal to VK_WHOLE_SIZE, size must be less than or equal to the size of the memory minus offset (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkMapMemory-size-00681)

os: archlinux
hash: 0a60c82
some vulkaninfo:

Vulkan Instance Version: 1.1.96
=====================
GPU id       : 0 (Intel(R) HD Graphics 620 (Kaby Lake GT2))

Here is shadow validation errors (appear to be same/similar):

ERROR 2019-04-25T06:28:56Z: gfx_backend_vulkan: [Validation]  [ VUID-vkMapMemory-size-00681 ] Object: 0x10fe (Type = 8) | Mapping Memory from 0x0 to 0x1c0 oversteps total array size 0x190. The Vulkan spec states: If size is not equal to VK_WHOLE_SIZE, size must be less than or equal to the size of the memory minus offset (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkMapMemory-size-00681)
ERROR 2019-04-25T06:28:56Z: gfx_backend_vulkan: [Validation]  [ VUID-vkMapMemory-size-00681 ] Object: 0x1101 (Type = 8) | Mapping Memory from 0x0 to 0x1c0 oversteps total array size 0x190. The Vulkan spec states: If size is not equal to VK_WHOLE_SIZE, size must be less than or equal to the size of the memory minus offset (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkMapMemory-size-00681)
ERROR 2019-04-25T06:28:56Z: gfx_backend_vulkan: [Validation]  [ VUID-vkMapMemory-size-00681 ] Object: 0x1104 (Type = 8) | Mapping Memory from 0x0 to 0x1c0 oversteps total array size 0x190. The Vulkan spec states: If size is not equal to VK_WHOLE_SIZE, size must be less than or equal to the size of the memory minus offset (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkMapMemory-size-00681)
ERROR 2019-04-25T06:28:56Z: gfx_backend_vulkan: [Validation]  [ VUID-vkMapMemory-size-00681 ] Object: 0x1107 (Type = 8) | Mapping Memory from 0x0 to 0x1c0 oversteps total array size 0x190. The Vulkan spec states: If size is not equal to VK_WHOLE_SIZE, size must be less than or equal to the size of the memory minus offset (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkMapMemory-size-00681)
ERROR 2019-04-25T06:28:56Z: gfx_backend_vulkan: [Validation]  [ VUID-vkMapMemory-size-00681 ] Object: 0x110a (Type = 8) | Mapping Memory from 0x0 to 0x1c0 oversteps total array size 0x190. The Vulkan spec states: If size is not equal to VK_WHOLE_SIZE, size must be less than or equal to the size of the memory minus offset (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkMapMemory-size-00681)

Functionality of HUB

After a few hours reading the HUB code, I think it just converts all the void * pointers to the right &'a (mut) T. Is that right?

Another problem: there is no code completion when opening the project in CLion.

Object lifetime management

Automatic lifetime management is one of the main usability/safety gains of WebGPU API.

First problem to solve - figure out what layer is responsible for this. Since wgpu-native exposes simple IDs and it doesn't know if the user keeps any of those, it can't know when resources are destroyed, unless specifically told to do so. It can, however, track the resource usage in pending command buffers and executing on GPU. That is why I believe the lifetime should go through the following stages at wgpu-native level:

  1. resource is created by the user
  2. resource is used for other resources or in commands
  3. user explicitly destroys the resource
  4. all dependent objects and command buffers are destroyed
  5. GPU has finished execution of any relevant command buffers
  6. the implementation actually destroys the resource

At the same time, wgpu-rs needs to be able to avoid explicit destruction and just use RAII for that. Other clients, like Gecko, know when a resource is garbage collected and can send us an appropriate destruction message.

Second problem - how exactly to implement that tracking at the wgpu-native level. I was initially thinking about having all objects carry something like this:

pub(crate) struct LifeGuard {
    /// Whether the user still holds a handle to the resource.
    pub owned_by_user: AtomicBool,
    /// How many other resources (e.g. views, bind groups) reference it.
    pub used_by_objects: AtomicUsize,
    /// How many recorded command buffers reference it.
    pub used_by_commands: AtomicUsize,
    /// Index of the last queue submission that used the resource.
    pub last_submission_index: AtomicUsize,
}

However, I later realized that we could instead enrich the Stored wrapper semantics to keep track of the usage. The sub-problem here is for Stored to not have Arc<Id> since it involves another indirection on access.
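The wgpu-rs side of stage 3 can be sketched as a plain RAII wrapper; `destroy_buffer` and the call counter are hypothetical stand-ins for the wgpu-native entry point:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts calls into the (stand-in) native destroy entry point.
static DESTROY_CALLS: AtomicUsize = AtomicUsize::new(0);

// Stand-in for wgpu-native's explicit-destroy entry point. In the real
// scheme this only marks the resource as no longer owned by the user;
// actual deletion waits until the GPU is done with it (stages 4-6).
fn destroy_buffer(_id: u64) {
    DESTROY_CALLS.fetch_add(1, Ordering::SeqCst);
}

struct Buffer {
    id: u64,
}

impl Drop for Buffer {
    fn drop(&mut self) {
        // RAII: dropping the wrapper performs the explicit destruction
        // that a C client would have to request manually.
        destroy_buffer(self.id);
    }
}
```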

C-first versus Rust-first approach

Currently, the library exposes extern C API functions. A possible (strongly typed, nicer) Rust wrapper is expected to be based on it, but it would have to operate on Handle objects instead of embedding the actual sub-structures where it needs them. This is a possible (even if minor) performance hit that we are likely OK with, since Rust clients using wgpu are not the most demanding in terms of CPU performance.

Another benefit of the C-first approach is an increased chance of contributions from Gecko developers who don't care much about Rust ecosystem wrappers. Still, the issue is open for discussion, since gfx-portability took a different approach (Rust-first) by building on gfx-hal.

Multi-crate structure

It looks like we should transform the repository to host multiple crates:

  • "native" wgpu implementation
  • remoting layer - #7
  • Rust wrapper

cc @grovesNL

Separate winit optional feature

With #62, winit is now implied when building in "local" mode. We should still make it optional. That may require changes to gfx-rs first, since all of the backends (except empty) have it enabled by default.

NodeJS bindings

These would be required in order to run the WebGPU CTS (cc @kainino0x). I wonder if we can auto-generate it from Rust, like we generate C bindings.

Overflow panic when creating a buffer of size 0

Creating a buffer of size 0 with create_buffer_mapped causes the following panic:

thread '<unnamed>' panicked at 'attempt to subtract with overflow', /home/rubic/.cargo/git/checkouts/wgpu-53e70f8674b08dd4/993293f/wgpu-native/src/device.rs:527:21

I don't know if it should panic here, but it should at least have a better panic message.

Happy to make a PR for a better message if that's the correct solution.
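A possible shape of the fix, assuming the panic comes from a subtract-based rounding computation (the function name and signature are hypothetical): validate the size up front and use arithmetic that cannot underflow.

```rust
// Reject zero-sized buffers with a clear error instead of letting a
// `size - 1`-style computation underflow and panic.
fn validate_buffer_size(size: u64, alignment: u64) -> Result<u64, String> {
    if size == 0 {
        return Err("buffer size must be greater than zero".to_string());
    }
    // Round up to the alignment without risking underflow.
    Ok(((size + alignment - 1) / alignment) * alignment)
}
```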

Support non-gfx backends

At the end of the day, when WebGPU is standardized, and there is a solid test suite provided, we will consider adding direct mapping backends to GL and potentially Metal.

Cargo test results in compile error on Windows

$ pwd
/g/rust/wgpu

$ cargo test
   Compiling wgpu-native v0.2.0 (G:\rust\wgpu\wgpu-native)
error[E0425]: cannot find value `raw` in this scope
   --> wgpu-native\src\instance.rs:122:24
    |
122 |     SurfaceHandle::new(raw)
    |                        ^^^ not found in this scope

Capturing internal state for debug investigations

We need to introduce some infrastructure for capturing the internal state, so that we can take a closer look when things go wrong. The most interesting things:

  • metadata of all of the resources (via the descriptors we pass at creation?)
  • state of resource trackers on all the things: devices, command buffers, passes
  • references from all around the world: other resources, trackers, active submissions

From this, we could go a step further and ask the capture for everything connected to a single node (e.g. a texture), to get an idea of how it's referenced and transitioned. This could possibly be done offline, as extra analysis of the captured data.

Native translation to FXC binaries

The FXC output format is not publicly specified, but there is some information about its structure:
http://timjones.io/blog/archive/2015/09/02/parsing-direct3d-shader-bytecode

It would be very nice to be able to generate it natively from SPIR-V. This would greatly improve the performance of (uncached) pipeline creation when working with API abstraction libraries that take SPIR-V input (gfx-rs/gfx#1374). This in turn could give us an edge compared to NXT and MoltenVK.

The downsides are:

  • difficult to debug
  • working without clear specification

Support WebGL (via `gfx-backend-gl`)

I know this is kind of a goofy request, as WebGPU itself is meant to run on the web natively, but given that it is currently still in development in all browsers (according to the Chrome Platform Status) and gfx-backend-gl supports WebGL, would it be reasonable for wgpu to support a WebGL backend?

I imagine this could be beneficial for those targeting WebGPU who want to run on browsers that only support WebGL.

Thanks for your time!

Empty window on macOS 10.12

Observed behavior:

On macOS 10.12: compiling the hello_triangle example succeeds, but executing the binary brings up an empty window (gray background).

After upgrading to macOS 10.14, the previously compiled hello_triangle executable works fine! (No recompilation necessary.)

Expected behavior:

Perhaps wgpu could be made to work on macOS 10.12. Alternatively, if wgpu is incompatible with macOS 10.12, I would expect it to report an error.

Track index epochs

We need to associate an epoch with each index. This would prevent any internal inconsistency in managing resource lifetimes.
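A minimal sketch of the idea, assuming indices are reused by a free list: each slot carries its current epoch, and a stale Id whose epoch no longer matches is rejected instead of silently aliasing the new resource. Names are illustrative, not wgpu internals:

```rust
// Epoch-tagged index: an Id is only valid while its epoch matches the slot's.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Id {
    index: u32,
    epoch: u32,
}

struct Storage<T> {
    items: Vec<Option<(T, u32)>>, // (value, current epoch) per slot
}

impl<T> Storage<T> {
    fn new() -> Self {
        Storage { items: Vec::new() }
    }

    /// Place a value into a (possibly reused) slot under a new epoch.
    fn insert(&mut self, index: usize, value: T, epoch: u32) -> Id {
        if self.items.len() <= index {
            self.items.resize_with(index + 1, || None);
        }
        self.items[index] = Some((value, epoch));
        Id { index: index as u32, epoch }
    }

    /// Returns None if the slot was reused under a newer epoch.
    fn get(&self, id: Id) -> Option<&T> {
        match self.items.get(id.index as usize)? {
            Some((value, epoch)) if *epoch == id.epoch => Some(value),
            _ => None,
        }
    }
}
```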

VK_FORMAT_R8G8B8A8_SRGB is not a supported vertex buffer format

When I try to use wgpu::VertexAttributeDescriptor with format wgpu::VertexFormat::Ushort2Norm, I get the error:

ERROR 2019-03-11T01:06:21Z: gfx_backend_vulkan: [Validation] [ VUID-VkVertexInputAttributeDescription-format-00623 ] Object: VK_NULL_HANDLE (Type = 0) | vkCreateGraphicsPipelines: pCreateInfo[0].pVertexInputState->vertexAttributeDescriptions[2].format (VK_FORMAT_R8G8B8A8_SRGB) is not a supported vertex buffer format. The Vulkan spec states: format must be allowed as a vertex buffer format, as specified by the VK_FORMAT_FEATURE_VERTEX_BUFFER_BIT flag in VkFormatProperties::bufferFeatures returned by vkGetPhysicalDeviceFormatProperties (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-VkVertexInputAttributeDescription-format-00623)

I think the current mapping:

UcharNorm => H::R8Srgb,
Uchar2Norm => H::Rg8Srgb,
Uchar3Norm => H::Rgb8Srgb,
Uchar4Norm => H::Rgba8Srgb,
Uchar4NormBgra => H::Bgra8Srgb,

should instead be:

UcharNorm => H::R8Unorm,
Uchar2Norm => H::Rg8Unorm,
Uchar3Norm => H::Rgb8Unorm,
Uchar4Norm => H::Rgba8Unorm,
Uchar4NormBgra => H::Bgra8Unorm,

Single-recording mode for passes

Currently, we are hitting a problem with the DX12 backend whenever we use render passes: it doesn't support more than a single command buffer in the recording state at a time per command pool. We keep the root command buffer open while the pass command buffer is recorded, so that we can append the necessary resource transitions at the end of that root command buffer.

We need a mode in which we'd close the root command buffer, then record the pass command buffer, and only afterwards record an additional command buffer for transitions. Not sure whether that one will even be needed in the general case, since:

  1. the Metal backend doesn't need transitions
  2. the DX12 backend has implicit conversion to/from the general state

Note: the problem currently results in validation warnings, but it will turn into a crash with gfx-rs/gfx#2667

[macOS] minimized and hidden (cmd+h) leaks memory and blocks.

It eventually crashes with thread 'main' panicked at 'attempt to add with overflow', wgpu-native/src/swap_chain.rs:177:27

Reproducible with #77 & master

Fullscreen lags to the point of being unusable; however, a maxed-out window size works as normal.
Moved to a separate issue since it no longer leaks memory and seems to be a different problem.

Memory usage spiked while this issue went up, and did not go away once the window was brought back into focus and out of fullscreen - so definitely a leak.

Edit: Happens with gfx-hal/quad+SDL|Winit examples as well.

Edit: Fix for minimized and hidden is in gfx-rs/gfx#2973

Edit: Seems to have appeared again #78 (comment)

vk: double render crash

Info: Windows 10, Vulkan, NVIDIA GeForce 1080.

Submitting to the device twice in the same frame causes a crash. Easiest to reproduce by taking cube.rs's render() function and repeating its contents verbatim inside a {...} block.

thread 'main' panicked at 'assertion failed: `(left == right)`
  left: `Ok(())`,
 right: `Err(ErrorDeviceLost)`', C:\Projects\gfx\src\backend\vulkan\src\lib.rs:765:9
stack backtrace:
   0: std::sys::windows::backtrace::set_frames
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858\/src\libstd\sys\windows\backtrace\mod.rs:94
   1: std::sys::windows::backtrace::unwind_backtrace
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858\/src\libstd\sys\windows\backtrace\mod.rs:81
   2: std::sys_common::backtrace::_print
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858\/src\libstd\sys_common\backtrace.rs:70
   3: std::sys_common::backtrace::print
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858\/src\libstd\sys_common\backtrace.rs:58
   4: std::panicking::default_hook::{{closure}}
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858\/src\libstd\panicking.rs:200
   5: std::panicking::default_hook
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858\/src\libstd\panicking.rs:215
   6: std::panicking::rust_panic_with_hook
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858\/src\libstd\panicking.rs:478
   7: std::panicking::continue_panic_fmt
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858\/src\libstd\panicking.rs:385
   8: std::panicking::begin_panic_fmt
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858\/src\libstd\panicking.rs:340
   9: gfx_backend_vulkan::{{impl}}::submit<gfx_backend_vulkan::command::CommandBuffer,core::iter::FlatMap<core::slice::Iter<wgpu_native::hub::Id>, alloc::vec::Vec<gfx_backend_vulkan::command::CommandBuffer>*, closure>,gfx_backend_vulkan::native::Semaphore,core::iter::FlatMap<alloc::vec::IntoIter<wgpu_native::swap_chain::SwapChainLink<u16>>, core::option::Option<(gfx_backend_vulkan::native::Semaphore*, gfx_hal::pso::PipelineStage)>, closure>,slice<gfx_backend_vulkan::native::Semaphore>*>
             at <::std::macros::panic macros>:8
  10: wgpu_native::device::wgpu_queue_submit
             at C:\Projects\wgpu\wgpu-native\src\device.rs:1155
  11: wgpu::Queue::submit
             at C:\Projects\wgpu\wgpu-rs\src\lib.rs:787
  12: cube::{{impl}}::render
             at .\src\cube.rs:385
  13: cube::framework::run<cube::Example>
             at .\src\framework.rs:127
  14: cube::main
             at .\src\cube.rs:394
  15: std::rt::lang_start::{{closure}}<()>
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858\src\libstd\rt.rs:64
  16: std::rt::lang_start_internal::{{closure}}
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858\/src\libstd\rt.rs:49
  17: std::panicking::try::do_call<closure,i32>
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858\/src\libstd\panicking.rs:297
  18: panic_unwind::__rust_maybe_catch_panic
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858\/src\libpanic_unwind\lib.rs:92
  19: std::panicking::try
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858\/src\libstd\panicking.rs:276
  20: std::panic::catch_unwind
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858\/src\libstd\panic.rs:388
  21: std::rt::lang_start_internal
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858\/src\libstd\rt.rs:48
  22: std::rt::lang_start<()>
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858\src\libstd\rt.rs:64
  23: main
  24: invoke_main
             at d:\agent\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:78
  25: __scrt_common_main_seh
             at d:\agent\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288
  26: BaseThreadInitThunk
  27: RtlUserThreadStart
error: process didn't exit successfully: `C:\Projects\wgpu\target\debug\cube.exe` (exit code: 101)

Handle IPC descriptor serialization

Follow-up from #20:

  • How do we need to handle serialization/deserialization for descriptors when using IPC?
  • Which parts need to be handled within wgpu-native vs. the caller?
  • Should we change the descriptor types to const uint8_t *desc (i.e. binary serialized) for the remote case in wgpu.h? (already done)
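To make the binary-serialized option concrete, here is a toy sketch (not the actual wgpu wire format) that flattens a hypothetical descriptor to little-endian bytes and reads it back, as the const uint8_t *desc remote path would require:

```rust
use std::convert::TryInto;

// Hypothetical descriptor; the real one would have more fields.
#[derive(Debug, PartialEq)]
struct BufferDescriptor {
    size: u64,
    usage: u32,
}

// Flatten to a fixed 12-byte little-endian layout: 8 bytes size + 4 bytes usage.
fn serialize(desc: &BufferDescriptor) -> Vec<u8> {
    let mut out = Vec::with_capacity(12);
    out.extend_from_slice(&desc.size.to_le_bytes());
    out.extend_from_slice(&desc.usage.to_le_bytes());
    out
}

// Rebuild the descriptor on the far side of the IPC barrier.
fn deserialize(bytes: &[u8]) -> Option<BufferDescriptor> {
    if bytes.len() != 12 {
        return None;
    }
    Some(BufferDescriptor {
        size: u64::from_le_bytes(bytes[0..8].try_into().ok()?),
        usage: u32::from_le_bytes(bytes[8..12].try_into().ok()?),
    })
}
```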

Support for arrays of textures

Hey, I'm trying to build a sprite renderer on top of wgpu, and one issue I'm running into is how to easily switch between textures on a per-frame basis. Ideally I don't want to have to create a BindGroup for each texture, and 2D texture arrays aren't ideal either, as the textures are not all the same size. One interesting option is to create an array of texture descriptors that I can index with a uniform. Something like this:

layout(set = 0, binding = 1) uniform texture2D textures[1024];

but this doesn't seem to be supported by wgpu - looking through the code I found: https://github.com/gfx-rs/wgpu/blob/master/wgpu-native/src/device.rs#L840 - which suggests that the descriptorCount cannot be configured.

Is this something in the works, or is there a better way I haven't considered?

Thanks

Use strongly typed Ids

With #[repr(transparent)] we should be able to wrap our Ids in newtypes to avoid accidental collisions. This is especially important for aliased indices: QueueId vs DeviceId, CommandEncoderId vs CommandBufferId.
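A minimal sketch of the idea (names illustrative): distinct #[repr(transparent)] wrappers over the same index type make passing a QueueId where a DeviceId is expected a compile error, at zero runtime cost:

```rust
type Index = u32;

// Same in-memory layout as Index, but distinct types at compile time.
#[repr(transparent)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct DeviceId(Index);

#[repr(transparent)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct QueueId(Index);

fn device_name(id: DeviceId) -> String {
    format!("device-{}", id.0)
}
```

Calling `device_name(QueueId(3))` would be rejected by the compiler, even though both wrap the same `u32`.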

Per-object uniform data

I may be misunderstanding the API, but it doesn't seem possible to have per-object/per-draw-call uniform data (e.g. transformations). I'm guessing this could be done if offsets were passed in to RenderPass::set_bind_group:

            pass.raw.bind_graphics_descriptor_sets(
                &&pipeline_layout_guard[pipeline_layout_id].raw,
                index as usize,
                bind_groups,
                &[], // <--------- here
            );

or if there was access to push constants.

Is there another way to have per-draw uniform data?

I'm referring to this for Vulkan: https://github.com/nvpro-samples/gl_vk_threaded_cadscene/blob/master/doc/vulkan_uniforms.md
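If a set_bind_group-style call accepted dynamic offsets (as the WebGPU spec eventually describes), per-draw data could be packed into one uniform buffer and addressed by offset. A sketch of just the offset arithmetic, assuming a 256-byte minUniformBufferOffsetAlignment (the common value; the real limit should be queried at runtime), with illustrative function names:

```rust
// Typical minUniformBufferOffsetAlignment; an assumption, query the device limit.
const UNIFORM_ALIGN: u64 = 256;

/// Round the per-draw struct size up to the required alignment.
fn aligned_stride(size: u64) -> u64 {
    (size + UNIFORM_ALIGN - 1) / UNIFORM_ALIGN * UNIFORM_ALIGN
}

/// Byte offset of draw `draw_index`'s uniform block in the shared buffer,
/// suitable as a dynamic offset for a set_bind_group-style call.
fn dynamic_offset(draw_index: u64, size: u64) -> u64 {
    draw_index * aligned_stride(size)
}
```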

Remoting layer

In order for this project to be used in Gecko, we need to support the fact that the WebGPU API bindings live in the content/script process, while the actual graphics context lives in the GPU process.

Here comes the remoting layer. It's basically a native C-API library with the same (or a close) interface that routes WebGPU calls across an IPC barrier. We may need multiple channels maintained simultaneously (e.g. a Web worker communicating with the command pool on the GPU side when recording commands), but we can start with a single one (aka Google's "wire").
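A minimal sketch of what such a wire could look like, assuming calls are reified as serializable messages sent over a channel and replayed on the GPU-process side; WireCommand and replay are illustrative names, not the wgpu API:

```rust
use std::sync::mpsc;

// Each C-API call becomes a message crossing the IPC barrier.
#[derive(Debug, PartialEq)]
enum WireCommand {
    CreateBuffer { id: u32, size: u64 },
    Submit { command_buffer_ids: Vec<u32> },
}

// On the GPU-process side, messages are decoded and executed; here we just
// describe them to keep the sketch testable.
fn replay(cmd: &WireCommand) -> String {
    match cmd {
        WireCommand::CreateBuffer { id, size } => {
            format!("create buffer {} ({} bytes)", id, size)
        }
        WireCommand::Submit { command_buffer_ids } => {
            format!("submit {} command buffers", command_buffer_ids.len())
        }
    }
}
```

An mpsc channel stands in for the IPC transport; the real layer would serialize each message into shared memory or a pipe.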

`hello_triangle` example makes computer run sluggishly.

Hey, so I was running the hello_triangle example (well, I modified the shaders, but at the moment I'm not drawing anything at all, so that shouldn't affect performance; I have also tried the plain example as it was) and the program seems to be slowing down my whole DE by a considerable amount. I can only assume this is not normal, so I decided to create this issue.

For info:
OS: Pop!_OS (Linux kernel version 4.18.0-15-generic)
CPU: AMD Ryzen 5 2600 (12) @ 3.400GHz
GPU: NVIDIA GeForce GT 1030
RAM: 8GiB DDR4

Video:
https://www.youtube.com/watch?v=V8RmLH4z2Ec

map_read_async is never called

I set up a map_read_async callback like this:

framebuffer_out.map_read_async(0, width as u32 * height as u32 * 4, |result: wgpu::BufferMapAsyncResult<&[u32]>| {
    if let wgpu::BufferMapAsyncResult::Success(data_u32) = result {
        println!("SUCCESS");
        for data in data_u32 {
            println!("{:x}", data);
        }
    } else {
        println!("ERROR");
    }
});

And the callback is never called.

Nothing is hit in the validation layers.

Full example here, where I attempt to draw to a texture, then copy the texture to a buffer, and then display it:

Non-borrowing rust bindings

Related to #37 , also commentary from @omni-viral

Our current Rust bindings assume unique ownership of the IDs. This is idiomatic but sometimes inconvenient for clients that need more freedom. Since our native layer is safe, we can make the IDs copyable on the Rust level as well; it's just going to be a slightly different kind of API.

It would be nice not to fragment the community with another Rust wrapper. Perhaps we can just control it with a feature? I.e.

#[cfg_attr(feature = "copy", derive(Clone, Copy))]
struct Buffer {
    id: BufferId,
}

impl<'a> RenderPassEncoder<'a> {
    #[cfg(feature = "copy")]
    pub fn end_pass(self) {
        ...
    }
}

Triangle example has validation layer errors

These two validation layer errors are repeatedly displayed while running the triangle example with the latest wgpu git master.
OS: Arch Linux
GPU: GTX 960
Driver: Nvidia proprietary driver, version 418

        [0] 0x56010ba02080, type: 6, name: NULL
ERROR 2019-03-28T08:00:06Z: gfx_backend_vulkan: [Validation]  [ VUID-vkBeginCommandBuffer-commandBuffer-00049 ] Object: 0x56010ba009a0 (Type = 6) | Calling vkBeginCommandBuffer() on active command buffer 56010ba009a0 before it has completed. You must check command buffer fence before this call. The Vulkan spec states: commandBuffer must not be in the recording or pending state. (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkBeginCommandBuffer-commandBuffer-00049)
VUID-vkBeginCommandBuffer-commandBuffer-00049(ERROR / SPEC): msgNum: 0 - Calling vkBeginCommandBuffer() on active command buffer 56010ba009a0 before it has completed. You must check command buffer fence before this call. The Vulkan spec states: commandBuffer must not be in the recording or pending state. (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkBeginCommandBuffer-commandBuffer-00049)
    Objects: 1
        [0] 0x56010ba009a0, type: 6, name: NULL
ERROR 2019-03-28T08:00:06Z: gfx_backend_vulkan: [Validation]  [ VUID-vkQueueSubmit-pCommandBuffers-00071 ] Object: VK_NULL_HANDLE (Type = 6) | Command Buffer 0x56010ba009a0 is already in use and is not marked for simultaneous use. The Vulkan spec states: If any element of the pCommandBuffers member of any element of pSubmits was not recorded with the VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT, it must not be in the pending state. (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkQueueSubmit-pCommandBuffers-00071)
VUID-vkQueueSubmit-pCommandBuffers-00071(ERROR / SPEC): msgNum: 0 - Command Buffer 0x56010ba009a0 is already in use and is not marked for simultaneous use. The Vulkan spec states: If any element of the pCommandBuffers member of any element of pSubmits was not recorded with the VK_COMMAND_BUFFER_USAGE_SIMULTANEOUS_USE_BIT, it must not be in the pending state. (https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-vkQueueSubmit-pCommandBuffers-00071)
    Objects: 1

[Metal/Intel] Shadow example rendering glitch

macOS 10.14.3
MacBook Pro (Retina, Mid 2012)
NVIDIA GeForce GT 650M 1024 MB
rustc 1.35.0-nightly (e68bf8ae1 2019-03-11)
Latest wgpu master at time of writing (9f70c2e)

cargo run --release --bin shadow --features=metal

I'd be happy to dig deeper but I'm not sure of the best place to start.

Screenshot 2019-03-12 at 20 28 26

hello_triangle_c panics while unwrapping power preference

Running hello_triangle_c causes the following error on the Rust side:

thread '<unnamed>' panicked at 'called `Option::unwrap()` on a `None` value', src/libcore/option.rs:345:21
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:39
   1: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:70
   2: std::panicking::default_hook::{{closure}}
             at src/libstd/sys_common/backtrace.rs:58
             at src/libstd/panicking.rs:200
   3: std::panicking::default_hook
             at src/libstd/panicking.rs:215
   4: std::panicking::rust_panic_with_hook
             at src/libstd/panicking.rs:478
   5: std::panicking::continue_panic_fmt
             at src/libstd/panicking.rs:385
   6: rust_begin_unwind
             at src/libstd/panicking.rs:312
   7: core::panicking::panic_fmt
             at src/libcore/panicking.rs:85
   8: core::panicking::panic
             at src/libcore/panicking.rs:49
   9: <core::option::Option<T>>::unwrap
             at /rustc/2aa4c46cfdd726e97360c2734835aa3515e8c858/src/libcore/macros.rs:10
  10: wgpu_native::instance::instance_get_adapter
             at wgpu-native/src/instance.rs:152
  11: wgpu_native::instance::instance_get_adapter
             at wgpu-native/src/instance.rs:161
  12: main
fatal runtime error: failed to initiate panic, error 5
Abort trap: 6

Steps to repro on macOS

git clone https://github.com/gfx-rs/wgpu.git
cd wgpu
cargo build 
# We now have target/debug/libwgpu_native.dylib

cd examples/hello_triangle_c
cmake .
make hello_triangle

export RUST_BACKTRACE=1
./hello_triangle_c

glfw 3.2.1
cmake 3.13.4
make 3.81
rust 1.33
macos 10.13.6
xcode 10.1

Prefer constant pointers for descriptors?

We shouldn't need to copy descriptors to control ownership on the Rust side, so I think it might generally be more efficient to accept const WGPUDeviceDescriptor *desc for example.

However for the remote use case, how do we need to handle descriptors, or binary data in general? Do we assume all of these will exist in shared memory and we get pointers directly to them? Or would descriptors be serialized?

Buffer::map_write_async is buggy and may be unsound

I'm having issues getting Buffer::map_write_async to work. If I use fill_from_slice(vertices), everything works fine, but map_write_async is problematic. It also seems to be unsound.

More often than not, my triangle does not render. I've tested with both the vulkan and the dx12 backends.

I think adding 'static to F: FnOnce(BufferMapAsyncResult<&mut [T]>) on map_write_async might fix the soundness issue, but I was still seeing buggy results even when I tried moving a clone of the vertices into the closure.

The code that I was testing with roughly looks like this:

let flags = BufferUsageFlags::TRANSFER_SRC | BufferUsageFlags::MAP_WRITE;
let temp = device.create_buffer_mapped::<[f32; 3]>(vertices.len(), flags).finish();

temp.map_write_async(0, vertices_byte_length, |buffer| {
    match buffer {
        BufferMapAsyncResult::Success(mapped) => {
            for i in 0..mapped.len() {
                mapped[i] = vertices[i];
            }
            println!("wrote vertices");
        }
        BufferMapAsyncResult::Error => {
            panic!("map failed");
        }
    }
});
// adding this still compiles and the console shows the drop before the write
println!("dropped vertices");
drop(vertices);

encoder.copy_buffer_to_buffer(&temp, 0, &vertex_buffer, 0, vertices_byte_length);
device.get_queue().submit(&[encoder.finish()]);

Edit:

I was able to force synchronization by submitting an empty command buffer prior to submitting the buffer copy. I was previously under the impression that, although the map operation was asynchronous, the order of operations would be preserved.

With that said, I think this may be working as intended, with the exception of the soundness hole.
