
maximeraafat / blendernerf

Stars: 682 · Forks: 36 · Size: 1.95 MB

Easy NeRF synthetic dataset creation within Blender

License: MIT License

Python 100.00%
addons ai blender computer-graphics computer-vision instant-ngp nerf neural-rendering python

blendernerf's People

Contributors

maximeraafat


blendernerf's Issues

The output images don't capture the whole model

Hi, thanks for your work:)

Hi, I tried the "PLAT COS" method and it looks fine: the different views are great, but the images seem weird, as they don't capture the whole model, like this:
image
I changed the camera settings but still don't get the right images. Could you give me some advice on how to get good outputs?
Also, could you kindly make a video on how to use this tool?

How to automate the process without using the Blender UI

Thank you for the wonderful work!

Since the add-on only works inside Blender, I currently have to open a 3D model, set the parameters, and run the script manually. Is there a way to automate this process by executing a Python script?

Thank you so much again!
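One approach is to run Blender headless (`blender --background scene.blend --python make_dataset.py`) and drive the add-on from the script. A minimal sketch follows; the scene property names (save_path, cos_dataset_name) and the operator id (object.camera_on_sphere) are assumptions about the add-on's API and should be checked against its source:

```python
# Sketch: automate dataset creation in a headless Blender run, e.g.
#   blender --background scene.blend --python make_dataset.py
try:
    import bpy  # only available inside Blender
except ImportError:
    bpy = None

def configure(scene, name, out_dir):
    # hypothetical BlenderNeRF properties registered on the scene
    scene.save_path = out_dir
    scene.cos_dataset_name = name
    return scene

if bpy is not None:
    configure(bpy.context.scene, 'my_dataset', '/tmp/nerf_out')
    bpy.ops.object.camera_on_sphere()  # assumed operator id
```

The same pattern works for batch jobs: one headless invocation per model file.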

Render from surface of arbitrary geometry:

In order to capture an oddly-shaped object (or an environment) it might be easiest to generate a "bounding volume" object that is basically a scaled-up (or down) low-detail version of the object/scene, and scatter camera positions on its surface (pointing into the center or out along its surface normals). This would provide a more flexible solution for scenes that either don't have an animated camera or don't conform neatly to the center of a sphere.
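The scattering step above can be sketched without Blender: area-weighted sampling of points (and their normals) on a triangle mesh, which could then drive camera placement toward the center or along the normals. A minimal NumPy sketch, not the add-on's code:

```python
import numpy as np

def sample_surface(verts, faces, n, seed=None):
    """Area-weighted random points and unit normals on a triangle mesh."""
    rng = np.random.default_rng(seed)
    tri = verts[faces]                          # (F, 3, 3) triangle corners
    e1, e2 = tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]
    cross = np.cross(e1, e2)
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    # pick triangles proportionally to their area
    idx = rng.choice(len(faces), size=n, p=areas / areas.sum())
    # uniform barycentric coordinates inside each chosen triangle
    r1, r2 = rng.random(n), rng.random(n)
    flip = r1 + r2 > 1
    r1[flip], r2[flip] = 1 - r1[flip], 1 - r2[flip]
    points = tri[idx, 0] + r1[:, None] * e1[idx] + r2[:, None] * e2[idx]
    normals = cross[idx] / np.linalg.norm(cross[idx], axis=1, keepdims=True)
    return points, normals
```

Each sampled point plus its normal gives a camera position and an inward (or outward) viewing direction on the scaled bounding volume.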

About the trajectory of camera

Hi, thanks for this great add-on

I wonder whether the trajectory of the "BlenderNeRF sphere" can be set to other shapes.

The current default is a sphere, which can be changed to a circle or a line, but can the trajectory be fully user-defined? The current sphere trajectory is suitable for object-level captures, but seems a poor choice when handling forward-facing scenes.

Scene is shifting?

I'm testing the add-on with Gaussian splatting, and it seems like the scene is shifting.

Are there any adjustments made on export? Can we turn them off?

Add a visualization of the aabb cube

If the AABB volume is set to 1, is that a 1x1x1 cube? Why is the default 4? It would be good to have a visual representation of the AABB box to understand what it covers.
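As a stopgap, a cube empty can stand in for the AABB inside the viewport. The sketch below assumes the AABB parameter is the side length of a cube centered at the origin, which should be verified against the add-on and instant-ngp; outside Blender only the helper runs:

```python
try:
    import bpy  # only available inside Blender
except ImportError:
    bpy = None

def aabb_half_side(aabb):
    # assumption: `aabb` is the side length of a cube centered at the origin
    return aabb / 2.0

if bpy is not None:
    empty = bpy.data.objects.new('AABB', None)
    empty.empty_display_type = 'CUBE'
    # a cube empty's display size is measured from its center to a face
    empty.empty_display_size = aabb_half_side(4)
    bpy.context.scene.collection.objects.link(empty)
```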

Request for Feature Addition: Simultaneous Rendering of Depth Maps with Optical Images and JSON Integration

Hello~

I hope this message finds you well. I am writing to express my sincere appreciation for your remarkable work. Your project has been instrumental in my work, and I have found it incredibly helpful.

Given the immense value your project has already provided, I would like to kindly request the addition of a new feature. It would be immensely beneficial if you could incorporate the functionality to render depth maps simultaneously with optical images and integrate them into the JSON file format, as required by projects such as DS-NeRF.

Thank you for your time and dedication to your project.

Camera Poses are incorrect in Instant-NGP

Hello,

Thanks for writing this tool! It is very helpful. I generated a synthetic dataset using COS, but when importing it into instant-ngp I find that the camera poses are translated much too far from the unit cube. When using the same images and running COLMAP, the poses are correct. Is there any condition on the scene (e.g. centered at 0, scaled to fit within a 1x1x1 cube in Blender) that I am missing?

Thanks!

image
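One workaround on the data side is to recenter and rescale the exported camera-to-world matrices so the cameras sit near the origin. A hedged NumPy sketch (instant-ngp applies its own scaling conventions on load, so the target radius should be checked against its NeRF loader):

```python
import numpy as np

def recenter(transforms, target_radius=1.0):
    """Shift and scale 4x4 camera-to-world matrices so that the camera
    positions fit in a box of half-size target_radius around the origin."""
    t = np.array([np.asarray(m)[:3, 3] for m in transforms])
    center = t.mean(axis=0)
    scale = target_radius / max(np.abs(t - center).max(), 1e-9)
    out = []
    for m in transforms:
        m = np.array(m, dtype=float)
        m[:3, 3] = (m[:3, 3] - center) * scale
        out.append(m)
    return out
```

Only the translation column is touched; rotations stay valid because uniform scaling of positions does not change viewing directions.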

About the test images

Thanks for the tool which really helps a lot!

Is there any way to generate the test renderings together with the camera poses? I found that only the test camera information is generated, which prevents me from computing metrics like PSNR at the test poses. Can I get these test images just like the training images?

Thanks!
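Until the add-on renders test views itself, the exported test JSON can be replayed inside Blender. A sketch; the file name transforms_test.json and the row-major camera-to-world convention are assumptions to verify against the add-on's output:

```python
import json
try:
    import bpy  # only available inside Blender
    from mathutils import Matrix
except ImportError:
    bpy = None

def frame_matrix(frame):
    # camera-to-world matrix as stored per frame in the NeRF-style JSON
    return frame['transform_matrix']

if bpy is not None:
    with open('transforms_test.json') as f:  # assumed file name
        meta = json.load(f)
    scene = bpy.context.scene
    for i, fr in enumerate(meta['frames']):
        scene.camera.matrix_world = Matrix(frame_matrix(fr))
        scene.render.filepath = f'//test/r_{i}.png'
        bpy.ops.render.render(write_still=True)
```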

BlenderNeRF Camera not found

Hi, thank you for this meaningful tool.

I tried to render .blend models with BlenderNeRF and it works well. However, when I make some modifications to the model (e.g. rotating a part of it), I encounter this error:

sphere_camera = scene.objects[CAMERA_NAME]
KeyError: 'bpy_prop_collection[key]: key "BlenderNeRF Camera" not found'

May you please look into it? Thank you in advance.

fixed cameras on COS

Hi, thanks for this great add-on.
I wonder if I can set multiple fixed cameras to render a dynamic scene using the Camera on Sphere method?

glass ceiling on radius parameter

Hello,

Thank you very much for this wonderful tool!
I would like to generate a Blender dataset with COS from an object. Unfortunately my object is large, and the radius parameter is limited to a maximum of 1000 m. How can I change the maximum value of this radius?

generating test renderings.

Hi,
I wanted to use this plugin to generate training and test data for our Gaussian splatting implementation.

It generates training data, but no test data. Looking into the code, I saw in cos_operator that it appears to only write the JSON file for the test split. Is generating the test renderings possible at all?

thanks, adam

Is the release version folder name wrong?

Hi, I found that the release version's folder name causes ModuleNotFoundError: No module named 'BlenderNeRF-2'; to fix this I have to manually rename the folder to BlenderNeRF.

Is there any way to fix this problem before release? It's hard for a Blender noob like me to figure out what happened.

how do I begin?

Hi, maybe I'm missing something. Do I feed in images to be trained, or should I come with an already trained set? In either case, it was not clear to me how to begin.

Thanks!

Failed to reconstruct with SPEED+ dataset

graphdeco-inria/gaussian-splatting#719 (comment)
Hello, I have been working on 3D reconstruction with 3DGS recently; 3DGS can also be trained on NeRF datasets. However, I keep facing reconstruction failures on the public SPEED+ dataset, so I would like to ask you about the correctness of my camera pose transformation. I have also encountered issues with the transformed SPEED+ dataset when training with NeRF.

import json
import math
import numpy as np

# Read the camera intrinsic parameters file
with open('camera.json', 'r') as f:
    camera_params = json.load(f)

# Read the camera extrinsic parameters file
with open(r'G:\speed+\shirtv1\roe2\roe2.json', 'r') as f:
    extrinsic_params = json.load(f)

# Extract the camera intrinsic matrix
camera_matrix = np.array(camera_params['cameraMatrix'])
dist_coeffs = np.array(camera_params['distCoeffs'])

# Calculate camera_angle_x from the image width and focal length
Nu = camera_params['Nu']
fx = camera_params['fx']
camera_angle_x = 2 * math.atan(Nu / (2 * fx))

# Storage for the NeRF-format frames
frames = []

def quaternion_to_rotation_matrix(q):
    q = np.array(q)
    r = np.array([[1 - 2 * (q[2] ** 2 + q[3] ** 2), 2 * (q[1] * q[2] - q[0] * q[3]), 2 * (q[1] * q[3] + q[0] * q[2])],
                  [2 * (q[1] * q[2] + q[0] * q[3]), 1 - 2 * (q[1] ** 2 + q[3] ** 2), 2 * (q[2] * q[3] - q[0] * q[1])],
                  [2 * (q[1] * q[3] - q[0] * q[2]), 2 * (q[2] * q[3] + q[0] * q[1]), 1 - 2 * (q[1] ** 2 + q[2] ** 2)]])
    return r

for param in extrinsic_params:
    filename = param['filename']
    q_vbs2tango_true = param['q_vbs2tango_true']
    r_Vo2To_vbs_true = param['r_Vo2To_vbs_true']

    # Create a rotation matrix
    rotation_matrix = np.eye(4)
    rotation_matrix[:3, :3] = quaternion_to_rotation_matrix(q_vbs2tango_true)

    # Construct a translation vector
    translation_vector = np.array(r_Vo2To_vbs_true).reshape(3, 1)

    # Build a transformation matrix
    transform_matrix = np.eye(4)
    transform_matrix[:3, :3] = rotation_matrix[:3, :3]
    transform_matrix[:3, 3] = translation_vector.squeeze()

    frame = {
        'file_path': filename,
        'rotation': camera_angle_x,
        'transform_matrix': transform_matrix.tolist()
    }
    frames.append(frame)

# Prepare data in NeRF format
nerf_data = {
    'camera_angle_x': camera_angle_x,  # the horizontal FOV computed above
    'rotation': frame['rotation'],
    'frames': frames
}

# Write the data to a file
with open(r'G:\speed+\shirtv1\roe2\lightbox\roe1_to_nerf.json', 'w') as f:
    json.dump(nerf_data, f, indent=4)

TypeError: empty_fn() missing 1 required positional argument: 'context'

Hello, I had this problem while running the plugin. How can I solve it?

Traceback (most recent call last):
  File "C:\Users\jUsers\AppData\Roaming\Blender Foundation\Blender\3.3\scripts\addons\BlenderNeRF-main\helper.py", line 116, in properties_desgraph_upd
    can_properties_upd(scene)
TypeError: empty_fn() missing 1 required positional argument: 'context'

Call in a for loop

Thanks for your amazing tool! It really helps and is integrated into my current production pipeline.

Have you tried calling bpy.ops.object.train_test_cameras() in a Python for loop? It seems the detection of the rendering process has a bug, which makes the for loop misbehave.

For example, in this case I want to create datasets by setting different shape keys:

import bpy
import time
key_blocks = bpy.data.shape_keys['Key.001'].key_blocks

bs_names = list(key_blocks.keys())
scene = bpy.data.scenes['Scene']

print(bs_names)
for name in bs_names:
    if name != 'Basis':
        key_blocks[name].value = 0
for name in bs_names:
    scene.save_path = "---"
    print(name)
    if name == 'Basis':
        scene.ttc_dataset_name = 'bs_neutral'
        # bpy.ops.object.train_test_cameras()
        
    else:
        key_blocks[name].value = 1
        scene.ttc_dataset_name = f'bs_{name}'
        bpy.ops.object.train_test_cameras()

        key_blocks[name].value = 0

Only the last dataset in bs_names is created; the others don't render. It seems related to the post_render function in helper.py and to the 'INVOKE_DEFAULT' argument of the call to bpy.ops.render.render.
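Since 'INVOKE_DEFAULT' schedules the render asynchronously, one workaround is to launch a separate headless Blender process per dataset, so each run finishes before the next starts. The file names, script, and the arguments passed after `--` below are hypothetical:

```python
import subprocess

def blender_cmd(blend_file, script, dataset_name):
    # everything after '--' is passed through to the Python script,
    # which can read it via sys.argv and set the shape key / dataset name
    return ['blender', '--background', blend_file,
            '--python', script, '--', dataset_name]

if __name__ == '__main__':
    for name in ['bs_neutral', 'bs_smile']:  # hypothetical shape keys
        cmd = blender_cmd('face.blend', 'make_dataset.py', name)
        # subprocess.run(cmd, check=True)  # requires Blender on PATH
        print(' '.join(cmd))
```

Each invocation is fully synchronous from the caller's point of view, so the render-completion detection inside the add-on is never fought against.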

The color of the object in the generated image is different.

Hello, I'm new to Blender. I used the object below to run COS, but the images turn out as shown below. What did I do wrong?
Screenshot from 2024-02-13 15-51-02

Below is the generated image.
0001
0002

Sorry for the many questions.
It seems like the COS method renders images in a random order; can I modify the code to get a sequence starting from a fixed point?

The SOF method seems to behave the way I want, right? And does that feature include rendering? Can the rendering be disabled?

Thanks for your reply!

Inside-out sphere option:

The sphere option is great for turntabling an object, but obviously not great for rendering an environment. It would be good to add an option that points the cameras outward to render a room or other environment from somewhere in the center (basically, where you would intend a viewer to see it from).

Question regarding GT

Thanks for the nice plugin; it works well for me. However, I want to use NeRF as a downstream task, so I wanted to add some more information to the output:

DLR-RM/BlenderProc#895

Do you have an idea why it is not working? With your plugin it works fine, but when getting the transformation matrix from there it just produces weird results, even when I render the same images.

How to calculate transform_matrix?

First, thank you for your great work! I'm a beginner at NeRF and I want to implement features like yours in Unity. How do I calculate the transform_matrix?
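Independent of Blender, a camera-to-world transform_matrix in the NeRF convention (the camera looks along its local -Z axis, local +Y is up, as Blender cameras do) can be built from a camera position and a look-at target. A minimal NumPy sketch:

```python
import numpy as np

def look_at(cam_pos, target, up=(0.0, 0.0, 1.0)):
    """4x4 camera-to-world matrix, NeRF/Blender convention:
    the camera looks along its local -Z axis, local +Y is up."""
    cam_pos = np.asarray(cam_pos, dtype=float)
    backward = cam_pos - np.asarray(target, dtype=float)  # local +Z
    backward /= np.linalg.norm(backward)
    right = np.cross(np.asarray(up, dtype=float), backward)
    right /= np.linalg.norm(right)
    true_up = np.cross(backward, right)
    m = np.eye(4)
    m[:3, 0], m[:3, 1], m[:3, 2] = right, true_up, backward
    m[:3, 3] = cam_pos
    return m
```

The columns are the camera's right, up, and backward axes plus its position; NeRF-style JSONs store exactly such a row-major 4x4 camera-to-world matrix per frame. Note that Unity is left-handed with +Y up, so an axis conversion is needed on top of this.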

How to get the donut dataset?

Hi, this is a good project about NeRF. However, could you explain more about how to get the dataset used in the Colab notebook? Thanks very much.

FileNotFoundError

I followed your instructions, but for some reason I'm not able to run the add-on. This is the error I'm getting, running on the latest Python version, 3.10.8.
I already gave Blender (including the add-on) elevated rights, same as for the destination folder.
image
