
performance-improvements's People

Contributors

trevorfsmith, yonet


performance-improvements's Issues

Feature Request: Multi-viewport per eye to support LMS, MRS, etc.

Currently, the API is limited to two eyes, returning two sets of projection and view matrices.

To support more efficient rendering, some platforms support features such as Lens Matched Shading and Multi Resolution Shading:

https://developer.nvidia.com/vrworks/graphics/lensmatchedshading
https://developer.nvidia.com/vrworks/graphics/multiresshading

In these cases, additional render targets are created with differing projections.

I would like feedback, and to explore ideas about how the WebVR API could be made forward-compatible with these rendering approaches.

Ideally, there could be a unified approach to handling other upcoming optimizations, such as foveated rendering.

With the current WebGL 2.0 API, multiple render passes would be required, but this is certainly still beneficial. We should also consider how this API will fit in the context of post-WebGL 2.0 modern graphics APIs.
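
For illustration only, here is a minimal sketch (in TypeScript) of what a view-count-agnostic render loop could look like. The RenderView interface below is an assumption about a possible future API shape, not part of the current WebVR API, which exposes exactly two eyes:

```typescript
// Hypothetical per-view description; WebVR 1.1 only exposes two fixed eyes,
// so this interface is an assumption about a future API shape.
interface RenderView {
  projectionMatrix: Float32Array;   // 4x4, column-major
  viewMatrix: Float32Array;         // 4x4, column-major
  viewport: { x: number; y: number; width: number; height: number };
}

// Render every view the system requests, whether that is 2 (one per eye),
// 4 (e.g. multi-resolution regions), or more, instead of hardcoding left/right.
function renderFrame(
  gl: WebGL2RenderingContext,
  views: RenderView[],
  drawScene: (projection: Float32Array, view: Float32Array) => void
): void {
  for (const v of views) {
    gl.viewport(v.viewport.x, v.viewport.y, v.viewport.width, v.viewport.height);
    drawScene(v.projectionMatrix, v.viewMatrix);
  }
}
```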

Need a way to determine whether rendering is falling below target frame rate

Issue by brianchirls
Monday Apr 04, 2016 at 00:09 GMT
Originally opened as MozillaReality/webvr-spec#29


I've been thinking about the possibility of adapting render quality to achieve the optimal frame rate, as described in this article. The article mentions things like adjusting the pre-distortion render buffer scale, adjusting anti-aliasing detail, and some more complicated ideas. I was also thinking about adjusting the far end of z-clipping (with some fog) where appropriate for the content, levels of detail, etc.

In order to do this, we need to know how well we're doing at meeting the target frame rate. The simplest way I can think of doing this is to report the frame rate of the VRDisplay device. Presumably this would be fixed for now, though not necessarily forever. There are arguments to be made against providing this information, but since there are annoying-but-feasible ways to figure it out anyway, we may as well provide it easily and correctly.

One way that this falls short, however, is that if you are rendering at the optimal frame rate, there isn't really a good way to know if you've got a little extra time so you can bump the quality up or run some other tasks. You'd have to try raising the quality and then waiting to see if that drops the frame rate, which is certainly not ideal. If anybody has any ideas about that, I'd love to hear them.

cc: @toji, per our chat in webvr slack
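
As a rough sketch of the adaptive approach described above (not a proposed API), the loop below estimates the achieved frame rate from requestAnimationFrame timestamps and nudges a render buffer scale up or down. The target rate is hard-coded precisely because there is currently no way to query it, and drawSceneAtScale is a placeholder for the app's own render path:

```typescript
// Sketch: adapt a pre-distortion render buffer scale to measured frame times.
// TARGET_FPS is hard-coded because the display's refresh rate is not exposed.
const TARGET_FPS = 90;
const TARGET_MS = 1000 / TARGET_FPS;

let renderScale = 1.0;              // multiplier on the recommended buffer size
let lastTimestamp: number | null = null;

declare function drawSceneAtScale(scale: number): void;  // app-specific render path

function onFrame(timestamp: number): void {
  if (lastTimestamp !== null) {
    const frameMs = timestamp - lastTimestamp;
    if (frameMs > TARGET_MS * 1.1 && renderScale > 0.5) {
      renderScale -= 0.05;          // missing the target: back off quickly
    } else if (frameMs < TARGET_MS * 0.9 && renderScale < 1.0) {
      renderScale += 0.01;          // headroom is only a guess, so ramp up slowly
    }
  }
  lastTimestamp = timestamp;
  drawSceneAtScale(renderScale);
  requestAnimationFrame(onFrame);
}

requestAnimationFrame(onFrame);
```

Note that this still suffers from the problem described above: once you hit the vsync rate there is no direct signal for spare headroom, which is why the upward adjustment is so conservative.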

Enable developers to select between multiple view configurations

A follow-up from this discussion at TPAC: https://www.w3.org/2018/10/25-immersive-web-minutes.html#item05

It's important to know whether any of the VR systems require more than two images to be sent to the system. For example, does a headset with four screens need the developer to provide four images (one per screen), or does it accept two images (one per eye) and handle the rest itself?

Summary from @toji

Dropping in some additional context from an internal conversation to make sure we don't lose sight of what this is about:

First off, this isn't really about displays but about how many views the application has to render, and the two don't have a 1:1 correlation. Example: the Oculus Rift has two physical displays, while the Oculus Go has a single physical display. Both would request two views.

Second, developers don't want to be forced to support, say, a 12-view CAVE system if all they really care about is headsets. From a technical perspective, using techniques like multiview rendering typically requires you to declare up front a maximum number of supported views, and some systems are limited to an upper bound of ~4. It wouldn't be productive to force people to always code to a theoretical upper bound when there's extremely little chance their content will be viewed that way.

Finally, certain systems like the StarVR headset, with 4 physical displays, will optimally render 4 views in order to make the best use of the hardware. They do have a mode, however (primarily for backwards compatibility with OpenVR games), where you can render only two views with a larger FOV. This might be faster for the app, and it makes content that's not designed for the display accessible, but it also doesn't make use of the full FOV the hardware is capable of. It would be nice if developers had a way to choose between those two modes.

FWIW, the ability to limit which configurations you support meshes nicely with OpenXR. (See page 42 of this slide deck.) I don't know that we want to take the exact same route, however, because that feels like a good way to dump fingerprinting information into the developer's lap prior to session creation.
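
To make the trade-off concrete, here is a purely hypothetical sketch: a maxViews session option (which does not exist in WebXR) paired with a render loop that iterates over however many views the viewer pose reports:

```typescript
// Purely hypothetical: declare the view configurations the app can handle at
// session request time. `maxViews` is NOT part of the WebXR API today.
async function startSession(xr: XRSystem): Promise<XRSession> {
  return xr.requestSession("immersive-vr", {
    maxViews: 2,                    // hypothetical option, e.g. a multiview shader limit
  } as XRSessionInit);
}

// Whatever configuration is chosen, the render loop iterates over pose.views
// rather than assuming exactly two eyes.
function onXRFrame(time: number, frame: XRFrame, refSpace: XRReferenceSpace): void {
  const pose = frame.getViewerPose(refSpace);
  if (!pose) return;
  for (const view of pose.views) {
    // ...bind this view's viewport and draw with view.projectionMatrix
    // and view.transform.inverse.matrix...
  }
}
```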

Expose a hidden area mesh or a hidden area depth write function

A simple optimization that native VR applications already take advantage of is to inhibit fragment shading of regions of the display panels that, after warping and optics, are not visible to the user.

OpenVR exposes such information as a mesh:

https://github.com/ValveSoftware/openvr/wiki/IVRSystem::GetHiddenAreaMesh

Essentially, such a mesh is rendered before the rest of the scene geometry using either depth buffer or stencil buffer writes to inhibit fragment processing of those pixels in subsequent render passes.

This can provide significant performance improvements without relying on any special rendering API functions. It is also much easier to integrate into rendering engines than other optimizations (e.g., multiview).

If others feel this is a good idea, I'd be glad to write up some WebIDL for such a function.
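
As a sketch of how such a mesh might be consumed, assuming a hypothetical getHiddenAreaMesh accessor (the WebIDL is exactly what remains to be written) and a trivial shader program that places the mask at the near plane, the mask is drawn depth-only before the scene so that later passes fail the depth test on hidden pixels:

```typescript
// Hypothetical accessor: hidden-area triangles for one eye, as NDC x/y pairs.
declare function getHiddenAreaMesh(eye: "left" | "right"): Float32Array;

// Draw the mask before the scene: write depth at the near plane with color
// writes off, so later passes fail the depth test on the hidden pixels.
// `maskProgram` is assumed to output gl_Position with z at the near plane.
function drawHiddenAreaMask(
  gl: WebGL2RenderingContext,
  maskProgram: WebGLProgram,
  eye: "left" | "right"
): void {
  const verts = getHiddenAreaMesh(eye);
  gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
  gl.bufferData(gl.ARRAY_BUFFER, verts, gl.STATIC_DRAW);

  gl.useProgram(maskProgram);
  gl.enableVertexAttribArray(0);
  gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);

  gl.colorMask(false, false, false, false);   // depth-only pass
  gl.depthMask(true);
  gl.depthFunc(gl.ALWAYS);
  gl.drawArrays(gl.TRIANGLES, 0, verts.length / 2);

  gl.colorMask(true, true, true, true);       // restore state for the scene pass
  gl.depthFunc(gl.LESS);
}
```

A stencil-buffer variant works the same way; the depth-write form has the advantage of requiring no extra attachment.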

Add support for an optional stabilization plane

Can we add an additional attribute, stabilization_depth, to XRRenderState?

It would define the distance at which the engine focuses its stabilization. Even though it's possible to estimate this from the depth buffer, only the author really knows this information.

We could specify that non-AR sessions ignore this value.
A reasonable default would be 1.0, and the value must lie between depthNear and depthFar.
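
For clarity, a hypothetical usage sketch: stabilizationDepth does not exist on XRRenderState today, and the clamp simply enforces the depthNear/depthFar constraint suggested above.

```typescript
// Hypothetical: `stabilizationDepth` is NOT part of XRRenderState; this only
// shows the proposed shape next to the real depthNear/depthFar attributes.
function setStabilizationDepth(session: XRSession, depthMeters: number): void {
  const { depthNear, depthFar } = session.renderState;
  // Clamp to the valid range suggested in the proposal.
  const depth = Math.min(Math.max(depthMeters, depthNear), depthFar);
  session.updateRenderState({
    stabilizationDepth: depth,      // hypothetical attribute
  } as XRRenderStateInit);
}
```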
