
unity-depthapi's People

Contributors

facebook-github-bot, markusscmeta, tudorjude, vasylbometa, vladimirmakaev


unity-depthapi's Issues

Issue / Suggestion

Is there any way to make occlusion work with UI elements such as sprites and images? It really breaks the look of my game. Also, is there any more information on the occluded-shadows solution? Thanks

Missing prefabs

The git clone was successful, but when I open the scenes I get missing prefabs and these errors. I am using Unity 2022.3.17f1, the newest Meta XR All-in-One SDK, and https://github.com/oculus-samples/Unity-StarterSamples, whose demo scenes all work. I also tried the passthrough example in the editor and it works. I wanted to import this Depth API to try it out, but it has errors and a missing Sceneswitcher prefab:
error

Occlusion Lit URP shader not occluding

I think there might be a bug in the Lit URP shader: occlusion does not seem to work on it at all. This only seems to be an issue in URP, because the Occlusion Standard BiRP shader works fine in a BiRP project.

Here's an example video. I'm using a Quest 3. The cube on the left is using URP Occlusion Lit shader, and the cube on the right is using URP Occlusion Unlit. You'll see the occlusion only works on the unlit cube.

occlude.mp4

I'm on the latest versions of the Depth API and Depth API URP support packages, and have 4.2.0-exp-env-depth.1 of the Oculus XR plugin.

Screenshot 2023-11-20 at 00 38 43

I have the EnvironmentDepthOcclusion prefab in the scene.

Screenshot 2023-11-20 at 00 39 04

I also have the OVRSceneManager prefab in the scene, with a Global Mesh that has a shadow receiver. I also tried using the Selective Passthrough shader on the Global Mesh, and that doesn't solve the issue. (The Selective Passthrough shader in general doesn't seem to work in URP, but I believe that's a separate issue, since Depth API should be able to do occlusion regardless, if I'm understanding right?)

Screenshot 2023-11-20 at 01 44 35

And all PST checks pass as well!

Unity editor, material becomes invisible

In the Unity editor, once you run the project over Oculus Link and hit Play, the occlusion materials become invisible. This happens only in the editor; they are still visible in a build and while the project is not running.

Using Shader Graph

Hey, is there a way to use shader graph for this? Is it possible to wire the given vertex / fragment pieces to the corresponding stages in shader graph?

Implement for two-eye cameras

Thanks for your great work!
I found that the occlusion is correct in the left eye but not in the right eye; there is an offset between my finger (for example) and the occlusion hole.
How can I fix this?

Feature Request: Hand Removal w/Virtual Gloves & Controllers

If I understand correctly, hand removal removes the hands from the depth based occlusion and then uses a mask from the hand tracking.

In a game where the player is wearing gloves, it would be nice to have the option to remove hands (including when using controllers) to avoid the occlusion interfering with gloves and/or weapons, etc.

URP Occlusion Sub Graph doesn't seem to work with 2023.2

I get no occlusion at all when using the "OcclusionSubGraph" subgraph in my custom Shader Graph in Unity. The pre-made shaders work with 2023.2, and my Shader Graph works with 2022.3.
I can send a super simple example project if needed.

No occlusion shown in built file

I cloned the project to my PC, then opened it in two separate Unity projects, one for URP and one for BiRP.
The problem is that when I build to an APK:
-In the TogglerOcclusion scene, when I press the A button to change the occlusion type, all the grab objects disappear.
-In the other scene, I can only see the No Occlusion objects.

All settings remain as newly cloned.

normal based depth occlusion

I would like to achieve a normal-based depth occlusion so that I can, for instance, render something behind a wall or roof that would otherwise be occluded. Any advice?

OcclusionSubGraph with custom shader

Hello,

Thanks a lot for this wonderful API.

I have a question regarding the OcclusionSubGraph: I am trying to use it with a custom shader, but the occlusion does not work.

I tried the DepthAPI-URP sample and looked at the LitOccluded and StylizedOcclusionEffectON shaders. But when I built the project and tried it, the occlusion did not work.

So I don't know how to make it work, either in the sample scene or in my custom one. Do you have any advice?

System :
Unity 2022.3.16 URP
Meta Quest 3

Thanks in advance for your answer

In PC mode with Single Pass Instanced, HighlightsAndShadows.shader renders only one eye

I want to add HighlightsAndShadows.shader to the scene. With Single Pass Instanced in PC mode, only the left eye is rendered; the right eye is not.

I followed the links below to make modifications, without success.
https://docs.unity3d.com/Manual/SinglePassInstancing.html
https://www.youtube.com/watch?v=JiCJN8EvoCA

I think the effect of combining depth occlusion and AR shadows is very realistic. Please help solve this problem. Thanks.

unity 2022.3.17;
depth api V60;
xr.management 4.41;
meta.xr.sdk.core V62;
meta.xr.mrutilitykit V60

Raycasting onto the Depth map

Firstly, thank you so much for this great repo.

I was wondering how I can raycast toward the depth map generated by Depth API - so I can make use of the hit 3D point. Is there a specific method/script for doing this?

This page mentions "Raycasting: using rays to point at physical surfaces. Supports use cases like content placement" - but I couldn't find information on how to do this in the context of Depth API.

Thank you!

Release Depth API for native development?

Hi, I know this is possibly not the best place to post this issue.

Currently the Depth (and Mesh) APIs for the Quest 3 are Unreal and Unity plugins only, and don't seem to be exposed via OpenXR extensions. Is it possible we can expect native (C/C++) APIs to access these features / data from the headset?

Thanks.

Occlusion Standard Shader not rendering after first execution

I have been experiencing an issue where I assign the Occlusion shader to a material and it doesn't render anything, except in the editor the first time I assign it. Once I enter Play mode, nothing that uses that shader is rendered, neither on the Quest 3 nor in Unity.

I also made another project from scratch following the instructions, and it didn't work either. Am I missing some setting?

Unity version is: 2022.3.11f LTS
Oculus Integration SDK version is: 57.0
Rendering Pipeline is: BiRP

I attached this video to show what I mean:

DepthAPIShaders_Issue.mp4

Using the Depth API without OVRCamera prefab

Hi,

We have managed to get the depth API in and working, but as we don't use the OVRCamera prefab we get some problems.

Specifically the "holes" appear offset when the camera rig is not at the origin. Setting _customTrackingSpaceTransform didn't work. We worked around this by directly applying our own eye poses in a derivative of EnvironmentDepthTextureProvider.

We then have the problem that reprojection latency causes the occlusion to "wobble" as the head moves.

Looking at the code, there seems to be a workaround for that in the sampling - but that doesn't work for us, as we can't match camera poses to the age of the depth buffer. In theory we could if we could use the createTime field in EnvironmentDepthFrameDesc, but firstly we don't know what the scale is and (more importantly) it is always zero...

Question about the macro "META_DEPTH_OCCLUDE_OUTPUT_PREMULTIPLY(i, fragmentShaderResult, 0.0);"

From the README, section 9 ("Implementing Occlusion in custom shaders"), Step 4 ("Calculate occlusions in fragment shader") uses:
META_DEPTH_OCCLUDE_OUTPUT_PREMULTIPLY(i, fragmentShaderResult, 0.0);

Does that mean the macro will use i.vertex in its calculation, so I need to name my positionWS variable "vertex"?

This is the input struct of my fragment function:
struct CharCoreVaryings
{
float4 positionHCS : SV_POSITION;
float3 normalWS : NORMAL;
float4 color : COLOR;
float4 uv : TEXCOORD0;
float3 positionWS : TEXCOORD1;
};

Should I rename "positionWS" to "vertex"?
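For reference, the occlusion macros in the Unity-DepthAPI README do not require a field literally named `vertex`; they expect the depth-related varyings to be declared through the helper macros. A minimal sketch based on the README's custom-shader steps (macro names taken from that guide; verify against your installed package version):

```hlsl
// Sketch, not a drop-in implementation: the struct declares the occlusion
// varying via META_DEPTH_VERTEX_OUTPUT instead of renaming positionWS.
struct CharCoreVaryings
{
    float4 positionHCS : SV_POSITION;
    float3 normalWS : NORMAL;
    float4 color : COLOR;
    float4 uv : TEXCOORD0;
    float3 positionWS : TEXCOORD1;
    META_DEPTH_VERTEX_OUTPUT(2)    // adds the world-position field the macros read (TEXCOORD2 here)
    UNITY_VERTEX_OUTPUT_STEREO     // required for stereo rendering
};

// In the vertex shader:
//     META_DEPTH_INITIALIZE_VERTEX_OUTPUT(output, input.positionOS);
// In the fragment shader:
//     META_DEPTH_OCCLUDE_OUTPUT_PREMULTIPLY(input, fragmentShaderResult, 0.0);
```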

Doesn't work with SDK v62?

I have two projects, one on SDK v60 and one on v62.
Depth API works well with SDK v60 but not with v62.
The model remains transparent the whole time.
I don't know why, but the SDK version seems to be the only difference between them.

Check if depth sensor is available

Is there a way to check whether the depth sensor is available on the Quest model a user is running?
If I enable environment depth occlusion on a Quest 3 device, everything is fine; but if I do it on a Quest 2 device, which does not have a depth sensor, materials using depth-occlusion shaders are not shown at all. I need a reliable way to check (even for future Quest models) whether I can enable depth occlusion in my scene.
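A hedged sketch of one way to gate this at runtime, assuming the experimental Oculus XR Plugin's `Unity.XR.Oculus.Utils.GetEnvironmentDepthSupported()` helper (verify the exact API against your plugin version; checking the headset model via `OVRPlugin.GetSystemHeadsetType()` is an alternative):

```csharp
using Unity.XR.Oculus;
using UnityEngine;

public class DepthOcclusionGate : MonoBehaviour
{
    [SerializeField] private GameObject _environmentDepthOcclusion; // the occlusion prefab instance

    private void Start()
    {
        // Assumed API: returns false on headsets without a depth sensor (e.g. Quest 2).
        bool supported = Utils.GetEnvironmentDepthSupported();

        // Only enable depth occlusion where it is actually available; otherwise
        // fall back to non-occluding shader variants.
        _environmentDepthOcclusion.SetActive(supported);
    }
}
```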

DepthAPI v67 broke the depth retriever script

  • The name 'EnvironmentDepthTextureProvider' does not exist in the current context

  • name 'Depth' does not exist in the namespace 'Meta.XR' (are you missing an assembly reference?)

using Meta.XR.Depth;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.InteropServices;
using UnityEngine;

// Based on comment by TudorJude: https://github.com/oculus-samples/Unity-DepthAPI/issues/16#issuecomment-1863006589
public class EnvironmentDepthAccess : MonoBehaviour
{
    private static readonly int raycastResultsId = Shader.PropertyToID("RaycastResults");
    private static readonly int raycastRequestsId = Shader.PropertyToID("RaycastRequests");
     
    [SerializeField] private ComputeShader _computeShader;

    private ComputeBuffer _requestsCB;
    private ComputeBuffer _resultsCB;


    /**
     * Perform a raycast at multiple view space coordinates and fill the result list.
     * Blocking means that this function will immediately return the result but is performance heavy.
     * List is expected to be the size of the requested coordinates.
     */
    public void RaycastViewSpaceBlocking(List<Vector2> viewSpaceCoords, out List<float> result)
    {
        result = DispatchCompute(viewSpaceCoords);
    }

    /**
     * Perform a raycast at a view space coordinate and return the result.
     * Blocking means that this function will immediately return the result but is performance heavy.
     */
    public float RaycastViewSpaceBlocking(Vector2 viewSpaceCoord)
    {
        var depthRaycastResult = DispatchCompute(new List<Vector2>() { viewSpaceCoord });
        return depthRaycastResult[0];
    }


    private List<float> DispatchCompute(List<Vector2> requestedPositions)
    {
        UpdateCurrentRenderingState();

        int count = requestedPositions.Count;

        var (requestsCB, resultsCB) = GetComputeBuffers(count);
        requestsCB.SetData(requestedPositions);

        _computeShader.SetBuffer(0, raycastRequestsId, requestsCB);
        _computeShader.SetBuffer(0, raycastResultsId, resultsCB);

        _computeShader.Dispatch(0, count, 1, 1);

        var raycastResults = new float[count];
        resultsCB.GetData(raycastResults);

        return raycastResults.ToList();
    }

    (ComputeBuffer, ComputeBuffer) GetComputeBuffers(int size)
    {
        if (_requestsCB != null && _resultsCB != null && _requestsCB.count != size)
        {
            _requestsCB.Release();
            _requestsCB = null;
            _resultsCB.Release();
            _resultsCB = null;
        }

        if (_requestsCB == null || _resultsCB == null)
        {
            _requestsCB = new ComputeBuffer(size, Marshal.SizeOf<Vector2>(), ComputeBufferType.Structured);
            _resultsCB = new ComputeBuffer(size, Marshal.SizeOf<float>(), ComputeBufferType.Structured);
        }

        return (_requestsCB, _resultsCB);
    }

    private void UpdateCurrentRenderingState()
    {
        _computeShader.SetTextureFromGlobal(0, EnvironmentDepthTextureProvider.DepthTextureID,
            EnvironmentDepthTextureProvider.DepthTextureID);
        _computeShader.SetMatrixArray(EnvironmentDepthTextureProvider.ReprojectionMatricesID,
            Shader.GetGlobalMatrixArray(EnvironmentDepthTextureProvider.Reprojection3DOFMatricesID));
        _computeShader.SetVector(EnvironmentDepthTextureProvider.ZBufferParamsID,
            Shader.GetGlobalVector(EnvironmentDepthTextureProvider.ZBufferParamsID));
    }

    private void OnDestroy()
    {
        // Release both compute buffers to avoid leaking GPU memory.
        _requestsCB?.Release();
        _resultsCB?.Release();
    }
}
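For context, the snippet above can be driven like this (a hedged usage sketch; `EnvironmentDepthAccess` is the class from the snippet, and coordinates are assumed to be normalized 0..1 view space):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class DepthProbe : MonoBehaviour
{
    [SerializeField] private EnvironmentDepthAccess _depthAccess; // component from the snippet above

    private void Update()
    {
        // Single blocking sample at the center of the view.
        float centerDepth = _depthAccess.RaycastViewSpaceBlocking(new Vector2(0.5f, 0.5f));

        // Batched samples resolved in one compute dispatch.
        var coords = new List<Vector2> { new Vector2(0.25f, 0.5f), new Vector2(0.75f, 0.5f) };
        _depthAccess.RaycastViewSpaceBlocking(coords, out List<float> depths);

        Debug.Log($"center={centerDepth}, left={depths[0]}, right={depths[1]}");
    }
}
```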

Debugging in Unity Editor

It works perfectly when built and run on the Quest 3.

Is there any way to make it work in the Unity editor so I can debug my applications?
I have Passthrough over Oculus Link and Share Point Cloud over Oculus Link enabled, but the right-eye projection seems off, and all objects with an occlusion shader disappear, regardless of whether they are occluded or not.

Possible to do collision detection?

Hi all,

Is it possible to detect collisions between virtual and real-world objects using Depth API?

I was hoping it's possible to estimate this, e.g., by checking whether their depth values are close to each other, which would indicate that they might be colliding. Is there a way to do this easily?

I think Depth API doesn't currently support meshes for physics-based interactions, but I thought there might be an easier way to check this, since we can compute depth already.

Thank you and I look forward to hearing from you!
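One rough approximation along those lines, sketched against the `EnvironmentDepthAccess`-style raycast helper quoted in an earlier issue in this list (hedged: whether the sampled value is in linear meters depends on that compute shader, so treat the tolerance as illustrative):

```csharp
using UnityEngine;

public class DepthContactCheck : MonoBehaviour
{
    [SerializeField] private EnvironmentDepthAccess _depthAccess; // helper from the earlier issue
    [SerializeField] private float _toleranceMeters = 0.05f;      // illustrative contact threshold

    // True if the object's eye-space depth is close to the environment depth
    // sampled at the same viewport position, i.e. it may be touching a real surface.
    public bool IsNearRealSurface(Camera cam, Transform virtualObject)
    {
        Vector3 vp = cam.WorldToViewportPoint(virtualObject.position);
        if (vp.z <= 0f) return false; // behind the camera, nothing to compare

        float envDepth = _depthAccess.RaycastViewSpaceBlocking(new Vector2(vp.x, vp.y));
        return Mathf.Abs(envDepth - vp.z) < _toleranceMeters;
    }
}
```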

"Redefinition of _Time" in custom shader

Hi, as I was trying to integrate occlusion into a custom URP shader using this repo, I got this error (on a shader that worked perfectly before): Shader error in 'Unlit/360ImagePortal_custom': redefinition of '_Time' at /DevPerso/Fenix/Demo/Library/PackageCache/[email protected]/ShaderLibrary/UnityInput.hlsl(40) (on d3d11)

And I can't figure out why. Here's the shader.

Shader "Unlit/360ImagePortal_custom"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100

        Stencil{
        Ref 2
        Comp Equal
        Pass Keep
        }

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            // make fog work
            #pragma multi_compile_fog

            // DepthAPI Environment Occlusion
            #pragma multi_compile _ HARD_OCCLUSION SOFT_OCCLUSION

            #include "UnityCG.cginc"
            #include "Packages/com.meta.xr.depthapi.urp/Shaders/EnvironmentOcclusionURP.hlsl"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                UNITY_FOG_COORDS(1)
                float4 vertex : SV_POSITION;
                
                float4 positionNDC : TEXCOORD2; // must not reuse TEXCOORD0 (taken by uv) or TEXCOORD1 (fog)

                UNITY_VERTEX_INPUT_INSTANCE_ID
                UNITY_VERTEX_OUTPUT_STEREO // required for stereo
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;

            v2f vert (appdata v)
            {
                v2f o;
        
                UNITY_SETUP_INSTANCE_ID(v);
                UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o); // required to support stereo
        
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                UNITY_TRANSFER_FOG(o,o.vertex);
        
                float4 ndc = o.vertex * 0.5f;
                o.positionNDC.xy = float2(ndc.x, ndc.y * _ProjectionParams.x) + ndc.w;
                o.positionNDC.zw = o.vertex.zw;
        
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i);
        
                // sample the texture
                fixed4 col = tex2D(_MainTex, i.uv);
                // apply fog
                UNITY_APPLY_FOG(i.fogCoord, col);

                // calculate UV for the depth texture lookup for occlusions
                float2 uv = i.positionNDC.xy / i.positionNDC.w;

                // pass UV and the current depth of the texel
                float occlusionValue = CalculateEnvironmentDepthOcclusion(uv, i.vertex.z);

                // consider early rejection to not write to depth if it's an opaque shader
                if (occlusionValue < 0.01)
                {
                    discard;
                }

                // premultiply color and alpha by occlusion value
                // when it's 1 - color is not affected - virtual covers real
                // when it's 0 - texel is invisible - virtual is under real
                // when it's in between - texel is semi transparent
                col *= occlusionValue;
        
                return col;
            }
            ENDCG
        }
    }
}
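A likely cause of the error above: `UnityCG.cginc` (Built-in RP) and the URP include chain pulled in by `EnvironmentOcclusionURP.hlsl` both declare `_Time`, so mixing them in one pass collides. A hedged sketch of the usual fix, keeping the pass on a single pipeline's include set (the BiRP include path is an assumption based on the Depth API packages; verify it against your installed version):

```hlsl
// Option A: keep the CGPROGRAM pass on Built-in RP includes only.
//     #include "UnityCG.cginc"
//     #include "Packages/com.meta.xr.depthapi/Runtime/BiRP/EnvironmentOcclusionBiRP.cginc"
//
// Option B: make it a URP pass. Switch CGPROGRAM/ENDCG to HLSLPROGRAM/ENDHLSL,
// drop UnityCG.cginc, and pair URP's Core.hlsl with the URP occlusion include:
//     HLSLPROGRAM
//     #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
//     #include "Packages/com.meta.xr.depthapi.urp/Shaders/EnvironmentOcclusionURP.hlsl"
//     ENDHLSL
```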

Environment Depth Occlusion shifts the occlusion when the MRUK room is rotated

I am using a scanned room with MRUK. For specific purposes, I have to rotate the room so that the desk aligns with Unity's default coordinate system.
However, doing this causes the depth occlusion to behave strangely, creating occlusions in shifted locations. The image below shows what happens when the room is rotated by 10 degrees on the Y-axis.

Screenshot 2024-06-25 171931

Does anybody know how to solve this problem?
So far I have tried adjusting the Custom Tracking Space Transform in the Environment Depth Texture Provider script, but it does not seem to help.

Please help me support a custom surface shader

I don't know how to modify this surface shader to support depth occlusion.

Shader "Custom/DepthSurfaceShader"
{
    Properties
    {
        _Color ("Color", Color) = (1,1,1,1)
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
        _Glossiness ("Smoothness", Range(0,1)) = 0.5
        _Metallic ("Metallic", Range(0,1)) = 0.0
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 200

    CGPROGRAM
    // Physically based Standard lighting model, and enable shadows on all light types
    #pragma surface surf Standard fullforwardshadows

    // Use shader model 3.0 target, to get nicer looking lighting
    #pragma target 3.0

    sampler2D _MainTex;

    struct Input
    {
        float2 uv_MainTex;
    };

    half _Glossiness;
    half _Metallic;
    fixed4 _Color;

    // Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
    // See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
    // #pragma instancing_options assumeuniformscaling
    UNITY_INSTANCING_BUFFER_START(Props)
        // put more per-instance properties here
    UNITY_INSTANCING_BUFFER_END(Props)

    void surf (Input IN, inout SurfaceOutputStandard o)
    {
        // Albedo comes from a texture tinted by color
        fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
        o.Albedo = c.rgb;
        // Metallic and smoothness come from slider variables
        o.Metallic = _Metallic;
        o.Smoothness = _Glossiness;
        o.Alpha = c.a;
    }
    ENDCG
}
FallBack "Diffuse"

}
Looking forward to your reply, thank you!

Unity OpenXR Plugin support roadmap

The documented requirements for the Depth API say that we need to use the Oculus XR Plugin. Unfortunately, our application needs to use the OpenXR plugin due to other dependencies, which means we can't use the Depth API.

Is there a roadmap for when we can expect the Depth API to be supported with the OpenXR plugin?

There was a Meta article written in 2021 stating "The Unity OpenXR plugin will be fully supported in early 2022 (Unity 2021 LTS) and will become the recommended path for Unity developers"

Is that still the case?

Updating to v61 breaks Unity

Unity 2022.3.7f1
Meta XR SDK v65
Windows 11

I had my project with both Depth API packages on v60. Today I had the great idea of clicking the update button on the first Depth API package to download v61. When it finished, the editor froze: I could click it on the task bar to minimize it, but nothing else, and the editor never entered a "not responding" state. I closed the editor, and now during loading it stops and shows "Unity: no valid user created or default window layout found..." with buttons to Quit or reset to factory settings.

Clicking the reset option does nothing and the message box reappears immediately. I found a suggestion on a forum to delete the file Library/ArtifactDB; this worked, and after some loading time the editor opened again, just with the layout reset.

Conflicts with the official Oculus Integration package

Hello, will the DepthAPI get integrated into the official Oculus Integration kit in the near future? It seems that trying to merge this with preexisting projects often results in plugin or code conflicts. Otherwise, the occlusion shader from this API works fantastically, and it will most likely be quintessential in future mixed-reality applications. As it currently stands, mixed-reality mode doesn't play nicely with large dynamic or static objects, as they impede vision of surrounding physical objects or people; these are both safety and aesthetic concerns, which is probably why this is one of the more prioritized features in Apple's Vision Pro.

Combining Shaders

I am not sure how exactly to follow the "Implementing Occlusion in custom shaders" steps to combine a custom shader with this one. All my attempts, and those of the people I've asked for help, have been failures. I think it would help a lot if there were an example. Here is the shader I am trying to combine it with, i.e., to apply the steps mentioned in part 8 of Getting Started to. I would appreciate it if you could show me how to apply the steps.

What I'm trying to do is occlude the shadows drawn on the global mesh by this custom shader. Here's an example photo with the objects and shadows underlined.
com DefaultCompany MiniGolfMR-20231123-210251

Error: DepthAPI: no environment texture

I'm getting the error "DepthAPI: no environment texture" from EnvironmentDepthTextureProvider.cs:108 when running my game. It seems to work sometimes, and sometimes not.
The result is that the custom shader is invisible / only shows the passthrough camera texture.

Any ideas what could cause this error?

Unity package manager issue

There seems to be an issue with the package manager in certain Unity versions (e.g. 2022.3.16). If anyone is experiencing unexpected behavior in Unity projects that use Meta SDKs, check the package manager and look for errors under these packages. It looks like this:

image

For some reason certain Unity versions are having problems with meta packages.

Solution: just close the editor, sign out of Unity and then sign back in.

v62 update issues with depth-api

Everything was working well with the depth-api project. I just updated the standalone and PCVR versions to the public test channel and am now fully updated to v62. I have hand tracking enabled, and it works on standalone in v62. I also have the Developer Runtime Features enabled, just as before.

But when I launch the depth-api project, it no longer shows the hands over PCVR Link on v62. I also tried a fresh install and it still doesn't work since this update. Any ideas?

Compatibility with the Meta MRUK Scene HighlightsAndShadow shader

This is not so much an issue as a feature request. Meta provides a shader for the room mesh to act as a shadow receiver, but I can't get it to work with the Depth API for proper occlusion (shadows render behind walls, etc.). So far I wasn't successful in importing the Depth API shader into the MRUK one in Unity 6. Will there be any support for this kind of feature in the future? Thanks.
The Meta shadow shader in question:
image

// Copyright(c) Meta Platforms, Inc. and affiliates.
// All rights reserved.
//
// Licensed under the Oculus SDK License Agreement (the "License");
// you may not use the Oculus SDK except in compliance with the License,
// which is provided at the time of installation or download, or which
// otherwise accompanies this software in either electronic or hard copy form.
//
// You may obtain a copy of the License at
//
// https://developer.oculus.com/licenses/oculussdk/
//
// Unless required by applicable law or agreed to in writing, the Oculus SDK
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//test
Shader "Meta/MRUK/Scene/HighlightsAndShadows"
{
    Properties
    {
        _ShadowIntensity ("Shadow Intensity", Range (0, 1)) = 0.8
        _HighLightAttenuation ("Highlight Attenuation", Range (0, 1)) = 0.8
        _HighlightOpacity("Highlight Opacity", Range (0, 1)) = 0.2
    }

    SubShader
    {
        PackageRequirements
        {
            "com.unity.render-pipelines.universal"
        }
        Tags
        {
            "RenderPipeline"="UniversalPipeline" "Queue"="Transparent"
        }
        Pass
        {

            Name "ForwardLit"
            Tags
            {
                "LightMode" = "UniversalForward"
            }

            Blend One OneMinusSrcAlpha
            ZTest LEqual
            ZWrite Off

            HLSLPROGRAM
            #pragma vertex ShadowReceiverVertex
            #pragma fragment ShadowReceiverFragment

            // GPU Instancing
            #pragma multi_compile_instancing
            #pragma instancing_options renderinglayer

            // Universal Pipeline keywords
            #pragma multi_compile _ _MAIN_LIGHT_SHADOWS _MAIN_LIGHT_SHADOWS_CASCADE _MAIN_LIGHT_SHADOWS_SCREEN
            #pragma multi_compile _ _ADDITIONAL_LIGHTS_VERTEX _ADDITIONAL_LIGHTS
            #pragma multi_compile_fragment _ _ADDITIONAL_LIGHT_SHADOWS

            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Lighting.hlsl"

            float _HighLightAttenuation;
            float _ShadowIntensity;
            float _HighlightOpacity;

            struct Attributes {
                float4 positionOS : POSITION;
                float3 normal : NORMAL;
                UNITY_VERTEX_INPUT_INSTANCE_ID
            };

            struct Varyings {
                float4 positionCS : SV_POSITION;
                float3 normalWS : NORMAL;
                float3 positionWS : TEXCOORD0;

                UNITY_VERTEX_INPUT_INSTANCE_ID
                UNITY_VERTEX_OUTPUT_STEREO
            };

            Varyings ShadowReceiverVertex(Attributes input) {
                Varyings output;
                UNITY_SETUP_INSTANCE_ID(input);
                UNITY_TRANSFER_INSTANCE_ID(input, output);
                UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(output);

                const VertexPositionInputs vertexInput = GetVertexPositionInputs(input.positionOS.xyz);
                output.positionCS = vertexInput.positionCS;
                output.positionWS = vertexInput.positionWS;
                output.normalWS = normalize(mul(unity_ObjectToWorld, float4(input.normal, 0.0)).xyz);
                return output;
            }

            half4 ShadowReceiverFragment(const Varyings input) : SV_Target {
                UNITY_SETUP_INSTANCE_ID(input);
                UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);

                half3 color = half3(0, 0, 0);
                half mainLightShadowAttenuation;

                // Main light shadows.
                VertexPositionInputs vertexInput = (VertexPositionInputs)0;
                vertexInput.positionWS = input.positionWS;
                const float4 shadowCoord = GetShadowCoord(vertexInput);
                mainLightShadowAttenuation = MainLightRealtimeShadow(shadowCoord);
                half alpha = (1 - mainLightShadowAttenuation) * _ShadowIntensity;

                //Additional lights highlights.
                float lightAlpha = 0;
                for (int i = 0; i < GetAdditionalLightsCount(); i++) {
                    Light light = GetAdditionalLight(i, input.positionWS, float4(0, 0, 0, 0));
                    float ndtol = saturate(dot(light.direction, input.normalWS));
                    lightAlpha = light.distanceAttenuation * ndtol * _HighLightAttenuation * light.shadowAttenuation;
                    color += light.color * lightAlpha * (1-alpha);
                }
                return half4(color, alpha + (lightAlpha * _HighlightOpacity));
            }
            ENDHLSL
        }
        Pass
        {
            Name "ShadowCaster"
            Tags{"LightMode" = "ShadowCaster"}

            ZWrite On
            ZTest LEqual

            HLSLPROGRAM
            // Required to compile gles 2.0 with standard srp library
            #pragma prefer_hlslcc gles
            #pragma exclude_renderers d3d11_9x
            #pragma target 2.0

            // -------------------------------------
            // Material Keywords
            #pragma shader_feature _ALPHATEST_ON

            //--------------------------------------
            // GPU Instancing
            #pragma multi_compile_instancing
            #pragma shader_feature _SMOOTHNESS_TEXTURE_ALBEDO_CHANNEL_A

            #pragma vertex ShadowPassVertex
            #pragma fragment ShadowPassFragment

            #include "Packages/com.unity.render-pipelines.universal/Shaders/LitInput.hlsl"
            #include "Packages/com.unity.render-pipelines.universal/Shaders/ShadowCasterPass.hlsl"
            ENDHLSL
        }
    }

    SubShader
    {
        Tags
        {
            "Queue"="AlphaTest"
        }

        //Accumulate point light contribution
        Pass
        {
            Name "PointLight Contribution"
            Tags
            {
                "LightMode" = "ForwardAdd"
            }
            ZWrite Off
            ZTest LEqual
            Blend One OneMinusSrcAlpha

            Cull Back

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma multi_compile_fwdadd_fullshadows

            #include "UnityCG.cginc"
            #include "UnityLightingCommon.cginc"
            #include "AutoLight.cginc"

            uniform float _ShadowIntensity;
            uniform float _HighLightAttenuation;
            uniform float _HighlightOpacity;

            struct appdata
            {
                float4 vertex : POSITION;
                float3 normal : NORMAL;
                UNITY_VERTEX_INPUT_INSTANCE_ID
            };

            struct v2f {
                float4 pos : SV_POSITION;
                float3 normal : TEXCOORD0;
                float3 worldPos : TEXCOORD1;
                LIGHTING_COORDS(2, 3)
                UNITY_VERTEX_OUTPUT_STEREO
            };

            v2f vert(appdata v) {
                v2f o;
                UNITY_SETUP_INSTANCE_ID(v);
                UNITY_INITIALIZE_OUTPUT(v2f, o);
                UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
                o.pos = UnityObjectToClipPos(v.vertex);
                o.worldPos = mul(unity_ObjectToWorld, v.vertex);
                o.normal = UnityObjectToWorldNormal(v.normal);
                TRANSFER_VERTEX_TO_FRAGMENT(o);
                return o;
            }

            struct Light {
                half3 direction;
                fixed4 color;
                float distanceAttenuation;
            };

            Light getLight(v2f i) {
                Light light;
                float3 dir;

                #if defined(POINT) || defined(POINT_COOKIE) || defined(SPOT)
                    dir = normalize(_WorldSpaceLightPos0.xyz - i.worldPos);
                #else
                    dir = _WorldSpaceLightPos0.xyz;
                #endif

                light.direction = dir;
                light.color = _LightColor0;
                UNITY_LIGHT_ATTENUATION(attenuation, 0, i.worldPos);
                light.distanceAttenuation = attenuation;
                return light;
            }

            fixed4 frag(v2f i) : COLOR {
                UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i);
                Light light = getLight(i);
                float ndtol = max(0.0, dot(i.normal, light.direction));
                float lightContribution = light.distanceAttenuation * _HighLightAttenuation * ndtol * light.color.w;
                float4 color = light.color * lightContribution;
                float alpha = lightContribution * _HighlightOpacity;
                return fixed4(color.r, color.g, color.b, alpha);
            }
            ENDCG
        }

        //Apply shadow attenuation for the main directionalLight
        Pass
        {
            Name "DirectionalShadows"
            Tags
            {
                "LightMode" = "ForwardBase"
            }
            ZWrite Off
            Cull Back
            Blend SrcAlpha OneMinusSrcAlpha
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma multi_compile_fwdadd_fullshadows

            #include "UnityCG.cginc"
            #include "AutoLight.cginc"

            uniform float _ShadowIntensity;
            uniform float _DepthCheckBias;

            struct appdata
            {
                float4 vertex : POSITION;
                float3 normal : NORMAL;
                UNITY_VERTEX_INPUT_INSTANCE_ID
            };

            struct v2f {
                float4 pos : SV_POSITION;
                float3 normal : TEXCOORD0;
                float3 worldPos : TEXCOORD1;
                LIGHTING_COORDS(2, 3)
                UNITY_VERTEX_OUTPUT_STEREO
            };

            v2f vert(appdata v) {
                v2f o;
                UNITY_SETUP_INSTANCE_ID(v);
                UNITY_INITIALIZE_OUTPUT(v2f, o);
                UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
                o.pos = UnityObjectToClipPos(v.vertex);
                o.worldPos = mul(unity_ObjectToWorld, v.vertex);
                o.normal = UnityObjectToWorldNormal(v.normal);
                TRANSFER_VERTEX_TO_FRAGMENT(o);
                return o;
            }

            fixed4 frag(v2f i) : COLOR {
                UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i);
                float attenuation = UNITY_SHADOW_ATTENUATION(i, i.worldPos);
                float3 lightDirection = _WorldSpaceLightPos0.xyz;
                float ndtol = dot(i.normal, lightDirection);
                int directionCheck = step(0,ndtol);
                float alpha = (1 - attenuation) * _ShadowIntensity * directionCheck;
                return fixed4(0, 0, 0, alpha);
            }
            ENDCG
        }

        // Cast shadows
        Pass {
            Name "ShadowCaster"
            Tags { "LightMode" = "ShadowCaster" }

            ZWrite On
            ZTest LEqual

            CGPROGRAM
            #pragma target 2.0

            #pragma shader_feature_local _ _ALPHATEST_ON _ALPHABLEND_ON _ALPHAPREMULTIPLY_ON
            #pragma shader_feature_local _METALLICGLOSSMAP
            #pragma shader_feature_local_fragment _SMOOTHNESS_TEXTURE_ALBEDO_CHANNEL_A
            #pragma skip_variants SHADOWS_SOFT
            #pragma multi_compile_shadowcaster

            #pragma vertex vertShadowCaster
            #pragma fragment fragShadowCaster

            #include "UnityStandardShadow.cginc"

            ENDCG
        }
    }
    Fallback Off
}

DepthAPI and MetaXR simulator

Hi there,
I'm trying to use the Meta XR Simulator to test DepthAPI behaviour in my shaders. The release notes for Simulator v65 and up say it should be possible to emulate DepthAPI with no code changes, but that doesn't seem to be the case. For instance, Utils.GetEnvironmentDepthSupported() returns false in the Meta XR Simulator, which deactivates the EnvironmentDepthTextureProvider.cs component, so the depth texture for the shaders is never populated.
Am I missing something?

A.
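A workaround I've seen suggested in the meantime — a hedged sketch, not an official simulator fix: bypass the hard support check when running in the editor/simulator so the provider stays active. `Utils.GetEnvironmentDepthSupported` and `EnvironmentDepthTextureProvider` are the Depth API types mentioned above; the `#if UNITY_EDITOR` fallback is my own assumption:

```csharp
using Meta.XR.Depth;
using UnityEngine;

// Sketch: keep the depth texture provider enabled under the editor/simulator
// even though GetEnvironmentDepthSupported() reports false there.
public class DepthSupportGuard : MonoBehaviour
{
    [SerializeField] private EnvironmentDepthTextureProvider m_provider;

    private void Start()
    {
#if UNITY_EDITOR
        // Meta XR Simulator v65+ claims depth emulation, but the support
        // query can still return false; force the provider on so shaders
        // at least receive whatever depth data the simulator produces.
        m_provider.enabled = true;
#else
        m_provider.enabled = Utils.GetEnvironmentDepthSupported();
#endif
    }
}
```

Whether the simulator then actually fills the depth texture is untested here; this only removes the component deactivation described above.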

Build Issue

Hello, when I build, the DepthAPI only seems to work on the device I built to; if I download the app on another device, it doesn't work. I believe it might be due to some permissions in developer mode. I have also added several permission requests to the manifest, but I'm not sure whether that's enough or whether something is simply missing. Does anyone know what this might depend on?
Here's a copy of my manifest:

<?xml version="1.0" encoding="utf-8" standalone="no"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" android:installLocation="auto">
  <application android:label="@string/app_name" android:icon="@mipmap/app_icon" android:allowBackup="false">
    <activity android:theme="@android:style/Theme.Black.NoTitleBar.Fullscreen" android:configChanges="locale|fontScale|keyboard|keyboardHidden|mcc|mnc|navigation|orientation|screenLayout|screenSize|smallestScreenSize|touchscreen|uiMode" android:launchMode="singleTask" android:name="com.unity3d.player.UnityPlayerActivity" android:excludeFromRecents="true" android:exported="true">
      <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
        <category android:name="com.oculus.intent.category.VR" />
      </intent-filter>
      <meta-data android:name="com.oculus.vr.focusaware" android:value="true" />
    </activity>
    <meta-data android:name="unityplayer.SkipPermissionsDialog" android:value="false" />
    <meta-data android:name="com.samsung.android.vr.application.mode" android:value="vr_only" />
    <meta-data android:name="com.oculus.supportedDevices" android:value="quest|quest2|questpro" replace="android:value" />
  </application>
  <uses-feature android:name="android.hardware.vr.headtracking" android:version="1" android:required="true" />
  <uses-feature android:name="oculus.software.handtracking" android:required="false" />
  <uses-permission android:name="com.oculus.permission.HAND_TRACKING" />
  <uses-feature android:name="com.oculus.experimental.enabled" android:required="true" />
  <uses-permission android:name="com.oculus.permission.USE_ANCHOR_API" />
  <uses-feature android:name="com.oculus.feature.PASSTHROUGH" android:required="true" />
  <uses-permission android:name="com.oculus.permission.USE_SCENE" />
</manifest>
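One thing worth checking, offered as an assumption rather than a confirmed diagnosis: the scene permission declared in the manifest above also has to be granted at runtime on recent Quest OS versions, and a device you deploy to over ADB may already have it granted while a freshly installed device does not. A sketch of requesting it via Unity's Android permission API (the permission string matches the `<uses-permission>` entry above):

```csharp
using UnityEngine;
using UnityEngine.Android;

// Sketch: prompt for the scene permission on first launch so depth/scene
// features don't silently fail on devices that never showed the dialog.
public class ScenePermissionRequester : MonoBehaviour
{
    // Same string as the com.oculus.permission.USE_SCENE manifest entry.
    private const string ScenePermission = "com.oculus.permission.USE_SCENE";

    private void Start()
    {
        if (!Permission.HasUserAuthorizedPermission(ScenePermission))
        {
            Permission.RequestUserPermission(ScenePermission);
        }
    }
}
```

If the second device never shows a permission dialog and depth stays disabled, this is a likely culprit; otherwise compare `adb shell dumpsys package` permission grants between the two devices.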

Save depth texture as image

I was trying to save the depth texture obtained from the DepthTextureProvider as JPG images. Here is what I did:

  1. RenderTexture.active = depthTexture.
  2. Create a Texture2D and call ReadPixels with Rect.
  3. Save image via EncodeToJPG and WriteAllBytes.

However, the image I got is either entirely black or entirely white, even though I can visualize the depth image on the Quest.

Here is the script I use:
```csharp
using UnityEngine;
using UnityEngine.UI;
using Meta.XR.Depth;
using TMPro;
using UnityEditor;
using System.IO;

namespace SolerSoft.Customization
{
    /// <summary>
    /// Renders a Meta depth map into Unity UI.
    /// </summary>
    public class RenderDepthMap : MonoBehaviour
    {
        #region Private Variables
        private int counter = 0;
        #endregion // Private Variables

        #region Unity Inspector Variables
        [SerializeField]
        [Tooltip("The RawImage where the physical depth map will be displayed.")]
        private RawImage m_physicalDepthImage;

        [SerializeField]
        [Tooltip("The depth texture provider.")]
        private EnvironmentDepthTextureProvider m_depthTextureProvider;
        #endregion // Unity Inspector Variables

        #region Private Methods
        /// <summary>
        /// Attempts to get any unassigned components.
        /// </summary>
        /// <returns>
        /// <c>true</c> if all components were satisfied; otherwise <c>false</c>.
        /// </returns>
        private bool TryGetComponents()
        {
            if (m_depthTextureProvider == null) { m_depthTextureProvider = GetComponent<EnvironmentDepthTextureProvider>(); }
            return m_physicalDepthImage != null && m_depthTextureProvider != null;
        }

        /// <summary>
        /// Attempts to show the depth textures.
        /// </summary>
        /// <returns>
        /// <c>true</c> if the textures were shown; otherwise <c>false</c>.
        /// </returns>
        private bool TryShowTextures()
        {
            // Attempt to get the global depth texture
            var physicalDepthTex = Shader.GetGlobalTexture(EnvironmentDepthTextureProvider.DepthTextureID);
            if (physicalDepthTex != null)
            {
                m_physicalDepthImage.enabled = true;
                m_physicalDepthImage.texture = physicalDepthTex;

                RenderTexture.active = (RenderTexture)physicalDepthTex;
                Texture2D copied = new Texture2D(physicalDepthTex.width, physicalDepthTex.height);
                copied.ReadPixels(new Rect(0, 0, physicalDepthTex.width, physicalDepthTex.height), 0, 0);
                copied.Apply();
                RenderTexture.active = null;

                // Save the texture to a file
                string path = "DepthMap" + counter + ".png";
                counter++;
                byte[] bytes = copied.EncodeToPNG();
                File.WriteAllBytes(path, bytes);

                return true;
            }
            else
            {
                m_physicalDepthImage.enabled = false;
                return false;
            }
        }
        #endregion // Private Methods

        #region Unity Message Handlers
        /// <summary>
        /// Start is called before the first frame update.
        /// </summary>
        protected void Start()
        {
            if (!TryGetComponents())
            {
                Debug.LogError("Missing components, disabling.");
                enabled = false;
            }
        }

        /// <summary>
        /// Update is called once per frame.
        /// </summary>
        protected void Update()
        {
            // Attempt to show the render textures
            TryShowTextures();
        }
        #endregion // Unity Message Handlers
    }
}
```
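A likely cause, offered as an assumption rather than a verified fix: the environment depth texture uses a floating-point format (and is a texture array with one slice per eye), while `new Texture2D(width, height)` defaults to RGBA32, so `ReadPixels` quantizes the depth values to black or white. Below is a sketch of a readback helper that keeps a matching `RFloat` format and remaps depth to grayscale before encoding; `ReadDepthAsGrayscale` is a hypothetical helper name:

```csharp
using UnityEngine;

public static class DepthReadback
{
    // Hypothetical helper: copy a float depth texture into an RFloat target,
    // read it back without quantization, and remap it to an 8-bit grayscale
    // texture that EncodeToPNG/EncodeToJPG can handle.
    public static Texture2D ReadDepthAsGrayscale(Texture depthTex)
    {
        // Blit resolves the source format (and, for texture arrays, slice 0)
        // into a plain 2D RFloat render target.
        var rt = RenderTexture.GetTemporary(depthTex.width, depthTex.height, 0,
                                            RenderTextureFormat.RFloat);
        Graphics.Blit(depthTex, rt);

        var prev = RenderTexture.active;
        RenderTexture.active = rt;

        // Matching the source format keeps the raw depth values intact.
        var raw = new Texture2D(depthTex.width, depthTex.height, TextureFormat.RFloat, false);
        raw.ReadPixels(new Rect(0, 0, depthTex.width, depthTex.height), 0, 0);
        raw.Apply();

        RenderTexture.active = prev;
        RenderTexture.ReleaseTemporary(rt);

        // PNG/JPG encoders expect 8-bit channels, so remap depth into gray.
        var gray = new Texture2D(raw.width, raw.height, TextureFormat.RGB24, false);
        var pixels = raw.GetPixels();
        for (int i = 0; i < pixels.Length; i++)
        {
            float d = Mathf.Clamp01(pixels[i].r); // depth lives in the red channel
            pixels[i] = new Color(d, d, d);
        }
        gray.SetPixels(pixels);
        gray.Apply();
        return gray;
    }
}
```

Note that the raw values are normalized device depth, not meters, so the grayscale image mostly shows near/far contrast; a linearization step would be needed to recover metric depth.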

Hand removal broken in Unity 2022.3.27

For some reason, updating to 2022.3.27 broke the hand removal feature. The code still runs and there is no error, but depth occlusion is now applied to the hands as well.

Make subgraph switchable precision

It would be good if the URP subgraph had switchable precision, using single or half precision depending on the shader it is used in. This could give a small performance bump (and I saw the HLSL code for it is already implemented).

-

Oops, never mind.
