maximeraafat / BlenderNeRF
Easy NeRF synthetic dataset creation within Blender
License: MIT License
I only get a training image directory, a train JSON file and a test JSON file, without a test image directory.
Hi, thanks for your work:)
Hi, I tried the "PLAT COS" method and it looks fine; the different views are great, but the images seem weird, as they don't capture the whole model, like this:
I changed the camera settings but still don't get the right images. Could you give me some advice on getting good outputs?
And I wonder, if it is possible, could you kindly make a video on how to use this tool?
Thank you for the wonderful work!
Now since the addon only works inside Blender, I have to manually open a 3D model, set parameters, and run the script. May I know whether I can automate this process by executing a Python script?
Thank you so much again!
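The headless workflow asked about above can in principle be scripted. A minimal sketch, assuming the addon exposes the `train_test_cameras` operator and the `save_path` / `ttc_dataset_name` scene properties mentioned in other comments in this thread (these names are assumptions drawn from those comments, not documented API):

```python
# Hypothetical driver for `blender --background scene.blend --python run.py`.
# Property and operator names are taken from other comments in this thread;
# treat them as assumptions rather than documented BlenderNeRF API.
def run_dataset(scene, dataset_name, save_path):
    import bpy  # only available when running inside Blender
    scene.save_path = save_path          # where the dataset is written
    scene.ttc_dataset_name = dataset_name
    bpy.ops.object.train_test_cameras()  # trigger the TTC dataset export
```

Launching Blender with `--background` and a `--python` script that calls such a function would avoid opening the UI at all.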
In order to capture an oddly-shaped object (or an environment) it might be easiest to generate a "bounding volume" object that is basically a scaled-up (or down) low-detail version of the object/scene, and scatter camera positions on its surface (pointing into the center or out along its surface normals). This would provide a more flexible solution for scenes that either don't have an animated camera or don't conform neatly to the center of a sphere.
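The camera-scattering part of the suggestion above can be sketched without Blender at all. This is a toy illustration, not BlenderNeRF code: it samples positions uniformly (area-weighted) over the faces of a triangulated bounding mesh, from which cameras could then be aimed at the center or along the normals.

```python
import numpy as np

# Sketch of the idea above: scatter candidate camera positions uniformly
# over the surface of a triangulated "bounding volume" mesh.
# Faces are chosen with probability proportional to their area, and points
# inside each triangle are drawn via folded barycentric coordinates.
def scatter_on_surface(vertices, faces, n, rng=None):
    vertices, faces = np.asarray(vertices), np.asarray(faces)
    if rng is None:
        rng = np.random.default_rng(0)
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    idx = rng.choice(len(faces), size=n, p=areas / areas.sum())
    u, v = rng.random(n), rng.random(n)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]  # fold back into the triangle
    return v0[idx] + u[:, None] * (v1[idx] - v0[idx]) + v[:, None] * (v2[idx] - v0[idx])
```

Inside Blender, the mesh data of the scaled proxy object would supply `vertices` and `faces`, and each sampled point would become a camera location.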
Hi, thanks for this great add-on
I wonder whether the trajectory for this add-on, the "BlenderNeRF Sphere", can be set to other shapes.
The current default is a sphere, which can be changed to a circle or a line, but can the trajectory be user-defined? The current sphere trajectory is suitable for object-level captures, but seems like a poor choice when handling forward-facing scenes.
I'm testing the add-on with Gaussian splatting, and it seems like the scene is shifting.
Are there any adjustments made on export? Can we turn them off?
If the AABB volume is set to 1, is that a 1x1x1 cube? Why is the default 4? It would be good to have a visual representation of the AABB box to understand what it covers.
Hello~
I hope this message finds you well. I am writing to express my sincere appreciation for your remarkable work. Your project has been instrumental in my work, and I have found it incredibly helpful.
Given the immense value your project has already provided, I would like to kindly request the addition of a new feature. It would be immensely beneficial if you could incorporate the functionality to render depth maps simultaneously with optical images and integrate them into the JSON file format, as required by projects such as DS-NeRF.
Thank you for your time and dedication to your project.
Hello,
Thanks for writing this tool! It is very helpful. I generated a synthetic dataset using COS but when importing it into instant-ngp I find that the camera poses are translated much further from the unit cube. When using the same images and running COLMAP, the poses are correct. Is there any condition on the scene (centered at 0, must be scaled to be within 1x1x1 cube in Blender) that I am missing?
Thanks!
Thanks for the tool which really helps a lot!
I wonder whether there is any way to generate the test rendering images together with the camera poses. I found that only the test camera information is generated, which prevents me from computing metrics like PSNR when rendering at the test camera poses. Is there a way to get these test images just like the training images?
Thanks!
Hi, thank you for this meaningful tool.
I tried to render .blend models with BlenderNeRF and it works well. However, when I made some modifications to the model (e.g. rotating a part of it), I encountered errors:
sphere_camera = scene.objects[CAMERA_NAME]
KeyError: 'bpy_prop_collection[key]: key 'BlenderNeRF Camera' not found'
May you please look into it? Thank you in advance.
Hi, thanks for this great add-on.
I wonder whether I could set up multiple fixed cameras to render a dynamic scene using the Camera on Sphere method.
Hello,
Thank you very much for this wonderful tool!
I would like to generate a blender dataset in COS from an object. Unfortunately my object is large, and the radius parameter is limited to 1000m maximum. How can I change the maximum value of this radius?
Hi,
I wanted to use this plugin to generate training and test data for our Gaussian splatting implementation.
It generates training data, but no test data. Looking into the code, I saw in cos_operator that it appears to only write the JSON file. Is generating the test renders possible at all?
thanks, adam
This might be valuable for determining where each camera should be placed to ensure maximum quality and coverage:
Hi, I found that the release version's folder name causes ModuleNotFoundError: No module named 'BlenderNeRF-2'. To fix this, I had to manually rename the folder to BlenderNeRF.
Is there any way to fix this problem before release? It's hard for a Blender novice like me to figure out what happened.
Hi, maybe I'm missing something. Can I feed in images to be trained, or should I come with an already trained set? In both cases, it wasn't very clear to me how to begin.
Thanks!
Hello,
I'm curious if this tool can be used to generate 3D Gaussian splatting datasets.
Thank you!
Hi, first I want to thank you for your work on this one. When I open the dataset, the camera looks different from the original camera position in Blender. I don't know if it is a zoom or a scale issue? Do you have any suggestions? Thanks in advance.
graphdeco-inria/gaussian-splatting#719 (comment)
Hello, I have been working on 3D reconstruction with 3DGS recently. 3DGS can also reconstruct from NeRF datasets. However, I have been facing continuous reconstruction failures when using the public SPEED+ dataset, so I would like to consult you about the correctness of my camera pose transformation. I have also encountered issues with the transformed SPEED+ dataset when training with NeRF.
import json
import math

import numpy as np

with open('camera.json', 'r') as f:
    camera_params = json.load(f)

with open(r'G:\speed+\shirtv1\roe2\roe2.json', 'r') as f:
    extrinsic_params = json.load(f)

camera_matrix = np.array(camera_params['cameraMatrix'])
dist_coeffs = np.array(camera_params['distCoeffs'])

# Horizontal field of view from image width (in pixels) and focal length
Nu = camera_params["Nu"]
fx = camera_params["fx"]
camera_angle_x = 2 * math.atan(Nu / (2 * fx))

frames = []

def quaternion_to_rotation_matrix(q):
    # Scalar-first quaternion (w, x, y, z) to a 3x3 rotation matrix
    q = np.array(q)
    r = np.array([[1 - 2 * (q[2] ** 2 + q[3] ** 2), 2 * (q[1] * q[2] - q[0] * q[3]), 2 * (q[1] * q[3] + q[0] * q[2])],
                  [2 * (q[1] * q[2] + q[0] * q[3]), 1 - 2 * (q[1] ** 2 + q[3] ** 2), 2 * (q[2] * q[3] - q[0] * q[1])],
                  [2 * (q[1] * q[3] - q[0] * q[2]), 2 * (q[2] * q[3] + q[0] * q[1]), 1 - 2 * (q[1] ** 2 + q[2] ** 2)]])
    return r

for param in extrinsic_params:
    filename = param['filename']
    q_vbs2tango_true = param['q_vbs2tango_true']
    r_Vo2To_vbs_true = param['r_Vo2To_vbs_true']

    # Build the 4x4 transformation matrix from rotation and translation
    transform_matrix = np.eye(4)
    transform_matrix[:3, :3] = quaternion_to_rotation_matrix(q_vbs2tango_true)
    transform_matrix[:3, 3] = np.array(r_Vo2To_vbs_true)

    frame = {
        'file_path': filename,
        'rotation': camera_angle_x,
        'transform_matrix': transform_matrix.tolist()
    }
    frames.append(frame)

nerf_data = {
    'camera_angle_x': camera_angle_x,  # was mistakenly set from 'ppx' before
    'rotation': frames[-1]['rotation'],
    'frames': frames
}

# Write the data to a file
with open(r'G:\speed+\shirtv1\roe2\lightbox\roe1_to_nerf.json', 'w') as f:
    json.dump(nerf_data, f, indent=4)
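One thing worth checking in the conversion above (this is an assumption on my part about the SPEED+ annotations, not something confirmed by the source): `q_vbs2tango_true` / `r_Vo2To_vbs_true` may describe a world-to-camera pose in the computer-vision convention, while NeRF's `transform_matrix` is camera-to-world in the OpenGL convention (x right, y up, camera looking down -z). If so, the pose must be inverted and the y and z axes flipped:

```python
import numpy as np

# Hedged sketch: convert a world-to-camera pose in the OpenCV convention
# (y down, z forward) into the camera-to-world OpenGL-style matrix that
# NeRF-format JSON files expect. Whether this applies to SPEED+ is an
# assumption to be verified against that dataset's documentation.
def w2c_cv_to_c2w_gl(R_w2c, t_w2c):
    w2c = np.eye(4)
    w2c[:3, :3] = R_w2c
    w2c[:3, 3] = t_w2c
    c2w = np.linalg.inv(w2c)   # world-to-camera -> camera-to-world
    c2w[:3, 1:3] *= -1.0       # flip y and z axes (OpenCV -> OpenGL)
    return c2w
```

Feeding world-to-camera matrices where camera-to-world is expected is a common cause of NeRF/3DGS training failures, so this is the first thing I would rule out.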
Hello, I had this problem while running the plugin. How can I solve it?
Traceback (most recent call last):
File "C:\Users\jUsers\AppData\Roaming\Blender Foundation\Blender\3.3\scripts\addons\BlenderNeRF-main\helper.py", line 116, in properties_desgraph_upd
can_properties_upd(scene)
TypeError: empty_fn() missing 1 required positional argument: 'context'
Thanks for your amazing tool! It really helps and is integrated into my current production pipeline.
Have you tried calling bpy.ops.object.train_test_cameras() in a Python for loop? It seems the detection of the rendering process has a bug that makes the for loop misbehave.
For example, in this case I want to create datasets by setting different shape keys:
import bpy

key_blocks = bpy.data.shape_keys['Key.001'].key_blocks
bs_names = list(key_blocks.keys())
scene = bpy.data.scenes['Scene']
print(bs_names)

# Reset all shape keys first
for name in bs_names:
    if name != 'Basis':
        key_blocks[name].value = 0

for name in bs_names:
    scene.save_path = "---"
    print(name)
    if name == 'Basis':
        scene.ttc_dataset_name = 'bs_neutral'
        # bpy.ops.object.train_test_cameras()
    else:
        key_blocks[name].value = 1
        scene.ttc_dataset_name = f'bs_{name}'
        bpy.ops.object.train_test_cameras()
        key_blocks[name].value = 0
Only the last dataset in bs_names is created; the others don't render. It's related to the post_render function in helper.py and also to the 'INVOKE_DEFAULT' argument of the call to bpy.ops.render.render.
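Since 'INVOKE_DEFAULT' makes the render call return before rendering finishes, a for loop fires all operator calls at once and only the last survives. A pattern that avoids this (sketched here in plain Python, no bpy, so the control flow is visible) is to start each job only from the previous job's completion callback; inside Blender, the same idea would hang on the bpy.app.handlers.render_complete handler:

```python
# Minimal sketch of callback chaining: each job starts only when the
# previous one's completion callback fires, instead of a for loop that
# launches everything before any render has finished.
class JobChain:
    def __init__(self, jobs):
        self.jobs = list(jobs)   # dataset names still to process
        self.done = []           # dataset names already completed

    def _on_complete(self, name):
        # In Blender this would be registered on render_complete.
        self.done.append(name)
        self._start_next()

    def _start_next(self):
        if self.jobs:
            name = self.jobs.pop(0)
            # In Blender this would set the shape key and invoke the
            # (asynchronous) operator; here we complete synchronously.
            self._on_complete(name)

    def run(self):
        self._start_next()
```

This is only an illustration of the control flow, not a drop-in fix for the addon's post_render logic.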
Hello, I'm new to Blender. I used the object below to execute COS, but the image turns out as shown below. What did I do wrong?
Sorry for the many questions.
It seems like the COS method renders images in random order; can I modify the code to get sequential data from a starting point?
It seems like the SOF method behaves similarly to what I want, right? And does this feature include rendering? Can it be removed?
Thank you for your reply!
The sphere option is great for turntabling an object, but obviously not great for rendering an environment. It would be good to add an option that points the cameras outward to render a room or other environment from somewhere in the center (basically, where you would intend a viewer to see it from).
In Blender, will the test efficiency be higher than with instant-ngp?
Thanks for the nice plugin, it works well for me. However, I wanted to use NeRF as a downstream task, so I wanted to add some more information to the output:
Do you have an idea why it is not working? With your plugin it works fine, but when I get the transformation matrix from here it just produces weird results, even when I render the same images.
First, thank you for your great work! I'm a beginner at NeRF and I want to implement features like yours in Unity. How do I calculate the transform_matrix?
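For the question above about computing a transform_matrix: one common approach (a sketch, not the addon's exact implementation) is to build a camera-to-world "look-at" matrix from a camera position and a target point, in the OpenGL convention NeRF datasets use (x right, y up, camera looking down -z):

```python
import numpy as np

# Hedged sketch of a NeRF-style camera-to-world look-at matrix.
# Columns are the camera's right, up and backward axes, plus its position.
def look_at(position, target, up=(0.0, 0.0, 1.0)):
    position, target, up = (np.asarray(v, dtype=float) for v in (position, target, up))
    forward = position - target            # camera looks down -z, so +z points away from target
    forward /= np.linalg.norm(forward)
    right = np.cross(up, forward)
    right /= np.linalg.norm(right)
    true_up = np.cross(forward, right)     # re-orthogonalized up vector
    m = np.eye(4)
    m[:3, 0], m[:3, 1], m[:3, 2], m[:3, 3] = right, true_up, forward, position
    return m
```

In Unity the same construction applies, with the caveat that Unity is left-handed with y up, so an additional axis conversion is needed before writing NeRF-format JSON.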
Hi, this is a good project about NeRF. However, could you explain more about how to get the dataset used in the Colab notebook? Thanks very much.