
easysynth's People

Contributors

fpanjevic, nikolajov96


easysynth's Issues

Export mesh aligned with camera pose

Hi,

I tried to export a scene mesh, but I found it was not aligned with the camera poses. Does anyone know if there is a way to export the mesh while keeping the mesh origin consistent with the camera poses?

v4.0.1 does not work

Describe the bug
Release v4.0.1 does not work under UE 5.3.2.

To Reproduce
I have followed the installation guide

Expected behavior
Unreal project opens without any problems.

Screenshots
The following error occurs while opening the project.

Assertion failed: ModuleManager.IsModuleLoaded(ModuleName) [File:D:\build\++UE5\Sync\Engine\Source\Runtime\Core\Public\Modules\ModuleManager.h] [Line: 309] Tried to get module interface for unloaded module: 'ConsoleVariablesEditor'

UnrealEditor_MovieRenderPipelineSettings
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_EasySynth!USequenceRenderer::USequenceRenderer() [C:\Users\jovan\Desktop\EasySynth\HostProject\Plugins\EasySynth\Source\EasySynth\Private\SequenceRenderer.cpp:86]
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_CoreUObject
UnrealEditor_Core
UnrealEditor_Core
UnrealEditor_Projects
UnrealEditor_Projects
UnrealEditor
UnrealEditor
UnrealEditor
UnrealEditor
UnrealEditor
UnrealEditor
UnrealEditor
kernel32
ntdll

Configuration (if applicable):

  • OS: Windows 11
  • GPU: NVIDIA RTX 3070 Ti
  • UE version: 5.3.2
  • Used template: Game with C++

Rendering order issue

Describe the bug
Rendered image order towards the end of the sequences doesn't match camera motion. E.g. frame 100 appears to be behind frame 99, even though the camera is moving forward.

However, poses reported in CSV output seem to be in the correct order - suggesting this could be a race condition in the rendering queue.

To Reproduce
Steps to reproduce the behavior:

  1. Render an EasySynth sequence with Color images only.
  2. Iterate through rendered images visually and look for order anomalies.

Expected behavior
Images should appear to be following a single smooth motion.

Screenshots
If applicable, add screenshots to help explain your problem.

Configuration (if applicable):

  • OS: Ubuntu 20.04, UE5
  • GPU: GeForce 1080Ti

Additional context
Add any other context about the problem here.

EXR Format

Does this plugin work well with the EXR format?

Way to Auto annotate our synthesized data

Is there a way to automatically annotate our synthesized images? I understand we can render semantically segmented images, but the output images are simply colored rather than labeled. Is there a way to generate some sort of annotation for the datasets we synthesize?
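There is no built-in label export that I know of, but since every class in SemanticClasses.csv has a unique color, the color-coded render can be converted to an integer label map offline. A minimal sketch (the function name and the way colors are passed in are my own, not part of the plugin):

```python
import numpy as np

def colors_to_labels(seg_rgb, class_colors):
    """Map an (H, W, 3) color-coded segmentation image to an (H, W)
    integer label map. class_colors is a list of (R, G, B) tuples in
    the order they appear in SemanticClasses.csv; pixels matching no
    class color get -1."""
    labels = np.full(seg_rgb.shape[:2], -1, dtype=np.int32)
    for idx, color in enumerate(class_colors):
        # Boolean mask of pixels exactly equal to this class color.
        mask = np.all(seg_rgb == np.asarray(color, dtype=seg_rgb.dtype), axis=-1)
        labels[mask] = idx
    return labels
```

Note this assumes exact color matches; if rendered colors drift by a unit or two, nearest-color matching would be needed instead.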

Semantic Segmentation window

Hi @NikolaJov96,

Sorry if it's a stupid question, but is there a way to resize the semantic segmentation window? I can't find a way to resize it or to get a scroll bar.
Thanks!

Noel
Screenshot 05-30-2023 15 29 29

Rendering Additional Images with View Modes

Hi, Thanks for the great work.

I am new to Unreal, so my questions may not make sense; thanks for your patience in advance.

  1. My project may need to render unlit images. Is there any way to render unlit or other view modes from Unreal with EasySynth?
  2. This may be related to Q1: are there any references on how EasySynth communicates with Unreal to pull the RGB, depth data, etc.?

Depth map rendering in UE5 on Ubuntu causes a crash

Describe the bug
Depth map rendering causes a crash in UE5

To Reproduce
Steps to reproduce the behavior:

  1. Create a scene in UE5
  2. Select Color images and Depth images in EasySynth
  3. Run rendering
  4. Once color image rendering completes, UE5 will crash.

Expected behavior
Color and depth images rendered.

Screenshots
If applicable, add screenshots to help explain your problem.

Configuration (if applicable):

  • OS: Ubuntu 20.04
  • GPU: GeForce 1080Ti

Additional context
Crash reports with stack traces attached.

MyProject2.log

Windows build error using UE5

Describe the bug
Hi, I'm trying to install the EasySynth plugin under Windows / UE5, but when building the engine with the plugin, I get the following error:
'UMoviePipelineExecutorBase *UMoviePipelineQueueSubsystem::RenderQueueWithExecutor(TSubclassOf<UMoviePipelineExecutorBase>)': cannot convert argument 1 from 'const FSoftClassPath' to 'TSubclassOf<UMoviePipelineExecutorBase>' UE5 xxx\UnrealEngine\Engine\Plugins\EasySynth\Source\EasySynth\Private\SequenceRenderer.cpp 406
If I remove EasySynth from my Plugins folder, the build runs without issues. Did you test the Plugin under Windows in UE5 yet? Is there something special I need to consider? Thanks!

Here's the full output:
1>[3/5] Compile Module.EasySynth.cpp
1>xxx\UnrealEngine\Engine\Plugins\EasySynth\Source\EasySynth\Private\SequenceRenderer.cpp(406): error C2664: 'UMoviePipelineExecutorBase *UMoviePipelineQueueSubsystem::RenderQueueWithExecutor(TSubclassOf<UMoviePipelineExecutorBase>)': cannot convert argument 1 from 'const FSoftClassPath' to 'TSubclassOf<UMoviePipelineExecutorBase>'
1>xxx\UnrealEngine\Engine\Plugins\EasySynth\Source\EasySynth\Private\SequenceRenderer.cpp(406): note: No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called
1>xxx\UnrealEngine\Engine\Plugins\MovieScene\MovieRenderPipeline\Source\MovieRenderPipelineEditor\Public\MoviePipelineQueueSubsystem.h(45): note: see declaration of 'UMoviePipelineQueueSubsystem::RenderQueueWithExecutor'
1>xxx\UnrealEngine\Engine\Plugins\EasySynth\Source\EasySynth\Private\Widgets\SemanticClassesWidgetManager.cpp(73): warning C4996: 'SColorBlock::FArguments::IgnoreAlpha': IgnoreAlpha is deprecated. Set AlphaDisplayMode to EColorBlockAlphaDisplayMode::Ignore instead. Please update your code to the new API before upgrading to the next release, otherwise your project will no longer compile.
1>xxx\UnrealEngine\Engine\Plugins\EasySynth\Source\EasySynth\Private\Widgets\SemanticClassesWidgetManager.cpp(282): warning C4996: 'SColorBlock::FArguments::IgnoreAlpha': IgnoreAlpha is deprecated. Set AlphaDisplayMode to EColorBlockAlphaDisplayMode::Ignore instead. Please update your code to the new API before upgrading to the next release, otherwise your project will no longer compile.
1>xxx\UnrealEngine\Engine\Plugins\EasySynth\Source\EasySynth\Private\Widgets\WidgetManager.cpp(380): warning C4996: 'FAssetData::ObjectPath': FName asset paths have been deprecated. Use GetSoftObjectPath to get the path this asset will use in memory when loaded, or GetObjectPathString() if you were just doing ObjectPath.ToString(). Please update your code to the new API before upgrading to the next release, otherwise your project will no longer compile.

To Reproduce
Steps to reproduce the behavior:

  1. Build UE5 from source under Windows 10 / Visual Studio 2019
  2. Clone the EasySynth repo to \Engine\Plugins and checkout branch "ue5"
  3. Open \UE5.sln in Visual Studio 2019, set solution configuration to Developer Editor and Platform to Win64
  4. Build UE5
  5. The error appears during compilation

Expected behavior
I can build UE5 with the EasySynth plugin without an error.

Configuration (if applicable):

  • OS: Windows 10 Pro
  • GPU: NVIDIA GeForce GTX 1080 Ti

Empty PNG output with PostProcess Material used after Tonemapping

Describe the bug

The PNG output is empty for all render types.

To Reproduce

Steps to reproduce the behavior:

  1. Check out the main branch.
  2. Create a new Unreal scene.
  3. Add any object (sphere, cube)
  4. Add a new sequencer with a camera
  5. Use the plugin as stated in the manual
  6. The PNG output is empty -> probably the transparency is 100% (not sure)
    (example image attached)
    Run1_0_0000

Configuration (if applicable):

  • OS: Windows 10
  • GPU: Nvidia 3070
  • Unreal: 4.27

Additional context
The problem is connected to the tonemapper. When I change the mode of the PP material from "After Tonemapper" to "Before Tonemapper", the output exists. The problem is the same for the "Replace Tonemapper" setting. Have you encountered this issue yourself?

Normal vectors are not directional

Describe the bug
Normal axes are represented by RGB image channels. Currently, each axis contains an absolute value, making the normals seldom useful.

To Reproduce
Steps to reproduce the behavior:

  1. Render normals on any sequence

Expected behavior
Normal axis values are mapped from the [-1, 1] range to [0, 1] using the (x + 1) / 2 formula.
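With normals encoded that way, recovering signed unit vectors from an 8-bit image is just the inverse transform; a small sketch (assuming 8-bit RGB output, which is my assumption about the file format):

```python
import numpy as np

def decode_normals(img_u8):
    """Invert the (x + 1) / 2 encoding: map [0, 255] back to [-1, 1]
    per channel, then re-normalize to undo quantization error."""
    n = img_u8.astype(np.float32) / 255.0 * 2.0 - 1.0
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.clip(norm, 1e-6, None)
```

The re-normalization step matters because rounding to 8 bits slightly breaks unit length.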

How to use optical flow data?

I used this code to convert the optical flow data, but it doesn't look right.
Here is my python code:

import cv2
import torch
import torch.nn as nn
import numpy as np
def img2flow(img, scale):
    '''
    :param img: cv img bgr
    :param scale: optical flow scale
    :return: dx, dy tensors
    '''
    h = img.shape[0]
    w = img.shape[1]
    img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  # H is angle, S is intensity
    ang = img[:, :, 0]  # * 2 * np.pi / 180
    mag = img[:, :, 1] / scale
    dx, dy = cv2.polarToCart(mag, ang, angleInDegrees=True)
    dx = w * dx
    dy = h * dy
    dx = torch.Tensor(dx).unsqueeze(0).unsqueeze(0)
    dy = torch.Tensor(dy).unsqueeze(0).unsqueeze(0)
    print(dx)
    print(dx.max())
    return dx, dy
def read_img2tensor(path):
    img = cv2.imread(path)
    img = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
    img = img.transpose(2, 0, 1)
    tensor = torch.Tensor(img).float().unsqueeze(0)

    #1*3*h*w

    return tensor
def vis_tensor(winname,tensor):
    #tensor 1 3 h w
    mat = tensor.squeeze().detach().numpy()
    mat = np.uint8(mat)  # float32-->uint8
    mat = mat.transpose(1, 2, 0)  # mat_shape: (982, 814,3)
    mat = cv2.cvtColor(mat,cv2.COLOR_BGR2RGB)
    #cv2.imshow(winname,mat)
    cv2.imwrite(r'E:\data\EasySynth_test2/'+winname+'.jpg',mat)
    return 0

def warp(x, flo):
    """
    warp an image/tensor (im2) back to im1, according to the optical flow
    x: [B, C, H, W] (im2)
    flo: [B, 2, H, W] flow
    """
    B, C, H, W = x.size()
    # mesh grid
    xx = torch.arange(0, W).view(1, -1).repeat(H, 1)
    yy = torch.arange(0, H).view(-1, 1).repeat(1, W)
    xx = xx.view(1, 1, H, W).repeat(B, 1, 1, 1)
    yy = yy.view(1, 1, H, W).repeat(B, 1, 1, 1)
    grid = torch.cat((xx, yy), 1).float()

    # if x.is_cuda:
    #     grid = grid.cuda()
    vgrid = grid + flo

    # scale grid to [-1,1]
    vgrid[:, 0, :, :] = 2.0 * vgrid[:, 0, :, :].clone() / max(W - 1, 1) - 1.0
    vgrid[:, 1, :, :] = 2.0 * vgrid[:, 1, :, :].clone() / max(H - 1, 1) - 1.0

    vgrid = vgrid.permute(0, 2, 3, 1)
    output = nn.functional.grid_sample(x, vgrid,mode="bilinear",align_corners=False)
    mask = torch.autograd.Variable(torch.ones(x.size()))
    mask = nn.functional.grid_sample(mask, vgrid)

    # if W==128:
    # np.save('mask.npy', mask.cpu().data.numpy())
    # np.save('warp.npy', output.cpu().data.numpy())

    mask[mask < 0.9999] = 0
    mask[mask > 0] = 1

    return output * mask


if __name__ == '__main__':
    # 'exr'
    path0 = r'E:\data\EasySynth_test\ColorImage/' + 'testSeq.0008.jpeg'
    path1 = r'E:\data\EasySynth_test\ColorImage/' + 'testSeq.0007.jpeg'
    #flow_path = r'E:\data\EasySynth_test\OpticalFlowImage_scale2/' + 'testSeq.0008.exr'
    flow_path = r'E:\data\EasySynth_test\OpticalFlowImage_scale1/' + 'testSeq.0008.exr'
    flow_img = cv2.imread(flow_path, cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)
    #flow_img = cv2.imread(flow_path)
    print(flow_img.max())
    dx, dy = img2flow(flow_img, scale=1)
    flow = torch.cat((dx, dy), dim=1)

    # test: read img0 and img1, warp img1 back toward img0
    img0 = read_img2tensor(path0)
    img1 = read_img2tensor(path1)
    img1_0 = warp(img1, flow)

    vis_tensor('0', img0)
    vis_tensor('0warp', img1_0)

testSeq.0007.jpeg:

testSeq 0007
testSeq.0008.jpeg:
testSeq 0008
And after warping frame 0007 to 0008 with testSeq.0008.exr, it looks like this:
0warp
I also tried other warping directions, such as warping 0008.jpeg to 0007.jpeg.
Is there something wrong?

Besides, when I use the .jpeg format to get the optical flow data (replacing the cv2.imread call accordingly), the values are far out of range, and in that case the scale doesn't seem to be applied correctly in my code.

Skeleton data generation for ST-GCN

Hello, thanks for sharing the EasySynth plugin.

I was wondering if there is any way to get real-time skeleton tree data.
For example, the X, Y, Z position/rotation of the spine start-end, head center etc.
I want to prepare skeleton data based on ST-GCN to experiment (test/learn).

Please help me.
Thank you.

Hair depth errors in MetaHumans

Thank you for your work. This plugin is very nice to use and it works in most scenarios.
I tried to use it to get the depth of the digital human in the official MetaHuman project, but the depth at the hair seems wrong. The depth value falls between the background depth and the foreground depth, especially at the edges of the hair or at individual strands of hair.
This seems to be caused by a feature of Groom itself. I don't know if there is a way to fix it.
lSequence_t1 0010
The maximum depth is set at 30m, the resolution 1080p, and the other settings remain the same as in the tutorial.

Expose plugin to python api

It would be very helpful if the plugin was also exposed through a python api.

That would help automate data generation via scripting, which could reduce errors etc.

Render multiple sequences

Hi! I would like to know if it's possible to render multiple sequences automatically instead of going one by one, as that would save a lot of time. I've tried to create a Python script, but it's impossible to run the plugin with it. Thanks!

Creating a stereo camera ground truth rig is difficult with camera's within blueprints or attached to other actors

Creating a stereo camera ground truth rig is difficult with cameras within blueprints or cine camera actors attached to other actors (see image below). In my workflow, I have attempted to use take recorder to record a vehicle blueprint with attached camera component and as well as using a level sequence and attaching a camera to a primary camera actor which I navigate in the level sequence. For the attached camera or camera within the blueprint, the camera poses can not be exported (EasySynt. The primary camera's camera pose in the example below can be exported.

image

image

Custom post-process material support?

Hi Nikola!

I'm still using your tool everyday and it's a huge help for me! Thank you so much!
Would it be possible to get the option to select my own post-process material in the EasySynth UI?
Something like this:
EasySynth_customPP

So far, I've added my custom PP material to the EasySynthMoviePipelineConfig file. But it requires restarting Unreal to load the update, and only then will EasySynth render the new PP material, with every selected "target" (i.e. if rendering Color images and Depth images, my custom PP material will be rendered twice: with the Color images and with the Depth images).
It's not the most friendly workflow, but it works. So, I'm wondering if this feature could be added to the plugin if it's not too much work?
Thank you!
Noel.

Randomize camera trajectory?

Hi,
I am wondering if there are ways to create camera trajectories randomly, so one can generate a large amount of data efficiently rather than manually creating level sequences.
I tried to find some way of creating level sequences randomly but did not have much luck. Any pointers would be really great. I also found this blog post; the section "Automatic navigation and screenshots" talks about how to automatically navigate through the scene. I am wondering whether that would be a good way to go and whether it would work with EasySynth.

Segmentation of Roads

Dear Developer!

I would like to ask if you can provide a workflow for segmenting roads in Unreal Engine.
Generally, in most cases roads are generated along a spline, and in the case of "off-road" roads, for example, it's only a material, not a mesh.

Can you provide a workflow for applying a class to roads generated along splines?

Thanks in advance!

Regards,
Lóránt

Depth and normals render as RGB occasionally

Describe the bug
Instead of depth and normal images, UE renders RGB images.

To Reproduce
No consistent repro available. It happens somewhat randomly without a clear pattern.

Expected behavior
RGB, depth and normal images rendered.

Configuration (if applicable):

  • OS: Windows
  • UE version: 5.0.3
  • GPU: 1070

Schedule multiple level sequences?

UE's Movie Render Queue allows users to specify an order and render different level sequences one by one, as shown in the image below. I am wondering if similar functionality can be added to EasySynth, or if it can already be done.
In one use case I encountered, I sample camera poses with different ranges for different level sequences and augment the scene differently. It would be helpful to schedule the sequences so that no intervention is needed during the rendering process.
movie_render_queue (1)

UE5 support for Mac

Any plans to support UE5?

I would love to help with this, but I'm just starting with the UE altogether.

EasySynth MovieRender Queue and City Sample UE5.3

Describe the bug
Whenever I try to render a sequence in one of the City Sample maps, I get:

[2024.07.04-06.14.33:000][611]LogEasySynth: UTextureStyleManager::OnLevelActorAdded: Adding actor 'BP_CrowdCharacter_C_15'
[2024.07.04-06.14.33:000][611]LogWindows: Error: appError called: Assertion failed: (Index >= 0) & (Index < ArrayNum) [File:D:\build++UE5\Sync\Engine\Source\Runtime\Core\Public\Containers\Array.h] [Line: 771]
Array index out of bounds: 1 from an array of size 1

UnrealEditor_Renderer
UnrealEditor_Engine
UnrealEditor
UnrealEditor
UnrealEditor_Core
UnrealEditor_Core
UnrealEditor_RenderCore
UnrealEditor_RenderCore
UnrealEditor_Core
UnrealEditor_Core
kernel32
ntdll

or:

Assertion failed: Index>=0 && Index<NumBits [File:D:\build++UE5\Sync\Engine\Source\Runtime\Core\Public\Containers\BitArray.h] [Line: 1410]

UnrealEditor_Renderer
UnrealEditor_Engine
UnrealEditor
UnrealEditor
UnrealEditor_Core
UnrealEditor_Core
UnrealEditor_RenderCore
UnrealEditor_RenderCore
UnrealEditor_Core
UnrealEditor_Core
kernel32
ntdll

I think it might have to do with MASS? Did anyone test this before?

To Reproduce
Steps to reproduce the behavior:

  1. Download CitySample 5.3
  2. Create a Level Sequence
  3. Run EasySynth
  4. The crash happens when a render style has finished, e.g. Color/Depth/etc.

Expected behavior
I expect the renders to complete without any hiccups.

Screenshots
If applicable, add screenshots to help explain your problem.

Configuration (if applicable):

  • OS: Windows 10
  • GPU: RTX4090
  • Ryzen9 7950X3D
  • 96GB of DDR5 Ram

Additional context
I will test if the same error occurs in 5.2

The error is very obscure and I don't see where it could originate from. However, if I do a normal Movie Render it doesn't crash. Also, using the same sequence in the empty level of the City Sample doesn't crash. It appears that the crash happens right after a render cycle finishes and it wants to go to the next one. Maybe it needs to reset the actor locations and does things that are not supposed to work? Any help would be appreciated.

Depth map

Hi,

I found that the "Depth Images" section under "Outputs' structure details" states: "Depth values are scaled between 0 and the specified Depth range value". I have the following questions:

  1. How exactly is the scaling done?
  2. Would it be different for the '.png' and '.exr' representations?
  3. Besides, would it be possible to get the absolute distance from the object to the camera from (1) the depth image from EasySynth or (2) other depth shaders?
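For question 1, my reading of the quoted sentence (an assumption on my part, not verified against the plugin source) is a linear mapping, which would make the PNG-to-meters conversion:

```python
import numpy as np

def png_depth_to_meters(depth_png, depth_range_m, bit_depth=16):
    """Hypothetical inverse of a linear depth encoding: a full-scale
    pixel sits at the configured depth range, 0 at the camera."""
    max_value = 2 ** bit_depth - 1
    return depth_png.astype(np.float64) / max_value * depth_range_m
```

For question 2, EXR stores floating-point values, so its pixels may already be directly usable without the integer quantization step; inspecting the actual pixel ranges of both outputs is the safest way to confirm how the scaling is done.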

How to get better anti-aliasing?

Hi Nikola!

I just started using your plugin and it is amazing!

I have big issues with anti-aliasing (see attached image). Switching to different types of AA didn't help.
Do you have any suggestions for getting better AA (like adding the AA option to the Movie Render Queue settings)?
Thanks!

Noel.

test_01 0005

UE 5.0, 5.1
Windows 11 pro
RTX 4090

Optical flow for Niagara smoke

Hi,
Can you please confirm if EasySynth can generate optical flow for 3D Niagara smoke particles in UE5? If not, is there any workaround? I am trying to generate an optical flow simulated dataset of smoke plumes from a drone's perspective in a forest environment.

Multi camera rig generates the same output images while metadata is correct

Hello! Thank you for developing and supporting such a handy tool. I tried to use it with a multi camera rig setup but ran into some difficulties that look like a bug. Could you please take a look or comment on this issue? Thanks!

Describe the bug
When setting up an empty actor with multiple cameras, different output images are expected. Instead, I got the same image for every camera in the rig.

To Reproduce
Steps to reproduce the behavior:

  1. Create an empty actor BP, add multiple cameras to it, and place it on the scene
  2. Create a new sequence and add the actor instance to it (e.g. by drag-and-drop)
  3. Capture the sequence and save
  4. Generate images with the created sequence
  5. All images for the different cameras have the same viewpoint

Expected behavior
Every camera generates an image from its corresponding viewpoint, along with matching metadata.

The images are the same, but the metadata matches each camera.

Additional context
When debugging, RigCameras[0] changes its RelativeTransform after RigCameras[0]->SetRelativeTransform(OriginalCameraTransform); but the images are from the same PoV.
Also, exactly before &USequenceRenderer::StartRendering the camera has the changed RelativeLocation, and by the next FRendererTarget::GetCameras() call it already has the default location. If the FoV is changed, it remains changed.

When the cameras have different FoVs, images with the different FoVs are rendered, but the PoV is still the same.
When rendering with the Movie Render Queue with Camera -> Render All Cameras selected, a set of images with different PoVs is generated.

Env
UE5.2.1, ToT
UE 4.27.2, v1.2.0

Post Process material support?

Hi Nikola!

Do you have any update on implementing post process material?

I see that EasySynth is using the default render settings from UE (UMovieRenderPipelineProjectSettings):
After a render with EasySynth, if I open the Movie Render Queue, I can see in the "Settings" tab that a temp file called "MoviePipelineMasterConfig_0" was created.
This temporary setting file seems to be the default settings file used by EasySynth to render.

Would it be possible for EasySynth to use a custom setting file instead of the default one?
A custom settings file in which we could turn on the Anti-aliasing and add any custom post process material? Or maybe overwrite the default one with custom settings...?
That would be awesome!
Thanks!

Noel.
Screenshot 05-25-2023 19 29 17

Color errors in semantic map (UE5)

Describe the bug
I only tested this in UE5. I'm not sure whether this is a bug or I'm doing something wrong, but for some colors the plugin produces erroneous colors in the semantic map, and consequently pixels belonging to the same class have color values that differ by one.

For example, I have a class to which I assign the RGB color (1.0, 0.502886, 0.0) using the Color Picker in UE. According to the SemanticClasses.csv, this class gets mapped to (255, 188, 0). However, in the final image, some pixels within the class are incorrectly assigned the value (255, 189, 0). This only happens for some color values.

To Reproduce
Steps to reproduce the behavior:

  1. Build UE5 and EasySynth ue5 branch from source
  2. Create level & open EasySynth Plugin
  3. Manage Semantic Classes, add a class and select a slightly odd color, e.g. RGB = (1.0, 0.502886, 0.0)
  4. Assign the semantic class to some objects
  5. Render (tested with jpg and png)

Expected behavior
All pixels belonging to one semantic class have the exact same color value.

Screenshots
Color in Unreal:
image
Color after opening & displaying image using Python:
image
image

Configuration (if applicable):

  • OS: Windows 10 Pro
  • GPU: NVIDIA GeForce GTX 1080 Ti
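Until the root cause of the drift is found, one workaround is to snap every rendered pixel to the nearest color listed in SemanticClasses.csv, which absorbs off-by-one rounding. A hedged sketch (assumes the drift stays far smaller than the distance between distinct class colors, and that images fit in memory):

```python
import numpy as np

def snap_to_class_colors(seg_rgb, class_colors):
    """Reassign every pixel of an (H, W, 3) uint8 image to the nearest
    class color (Euclidean distance in RGB)."""
    colors = np.asarray(class_colors, dtype=np.int32)           # (K, 3)
    flat = seg_rgb.reshape(-1, 3).astype(np.int32)              # (N, 3)
    # Squared distance from every pixel to every class color.
    d = ((flat[:, None, :] - colors[None, :, :]) ** 2).sum(-1)  # (N, K)
    nearest = colors[d.argmin(axis=1)]
    return nearest.reshape(seg_rgb.shape).astype(np.uint8)
```

For large images, the (N, K) distance matrix can be chunked row-by-row to bound memory.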

Get 2d boundingbox

Is it possible to get the 2D or 3D bounding boxes of the actors (e.g. vehicles, pedestrians) in each frame?
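EasySynth doesn't emit bounding boxes as far as I can tell, but 2D boxes can be approximated from the semantic segmentation output by taking the pixel extent of each class color. A sketch (note this yields one box per class, not per instance; touching instances merge, and connected-component labeling would be needed for per-actor boxes):

```python
import numpy as np

def bbox_from_mask(seg_rgb, color):
    """Return (x_min, y_min, x_max, y_max) over all pixels matching a
    class color in an (H, W, 3) image, or None if the class is absent."""
    mask = np.all(seg_rgb == np.asarray(color, dtype=seg_rgb.dtype), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

3D boxes would need the actor transforms, which this pixel-space approach cannot recover on its own.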

[Feature Request] Support Programmatic Single Frame Capture

After exploring other options, I really like the implementation provided here.

However, the project seems constrained to sequence-based (video-like) workflows. Is there any way to expose some of the internals for programmatic use?

Having simple library calls to provide a camera and trigger the capture of a single frame would open up a lot of use cases outside of sequences. For example, I'm looking to programmatically reset/perturb the same scene countless times over a range of parameters and capture an individual frame for each "run".
