
3d-front-toolbox's Introduction

3D-FRONT-Toolbox

2022.4.21

  • Rewrote json2obj.py to handle the updated 3D-FRONT release (partially open-sourced interior finishes with textures) and added new features.

Algorithm Description

  • Input: scene JSON

    • scene JSON file, *.json
  • Output: camera JSON

    • camera parameters (list) JSON file, *.json
  • Parse the information in the scene JSON, including layouts, floors, and furniture, and generate camera parameters from this information.

  • The camera parameters can be used to render scenes.

  • Bedroom: camera anchored to the bed

  • DiningRoom: camera anchored to the dining table

  • LivingRoom: camera anchored to the main sofa

How To Use

python run.py --input=scene.json --output=path/camera.json

Camera Description

  • pos: camera position
  • target: the point the camera looks at
  • fov: field of view (viewing angle)
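As a hedged sketch of how these parameters might be consumed (the field names follow the list above; the helper function and sample values are my own, not part of the toolbox), the view direction can be derived from pos and target:

```python
import math

def view_direction(pos, target):
    """Unit vector pointing from the camera position toward the target."""
    d = [t - p for t, p in zip(target, pos)]
    norm = math.sqrt(sum(c * c for c in d))
    return [c / norm for c in d]

# Hypothetical camera entry shaped like the description above.
cam = {"pos": [1.0, 1.6, 4.0], "target": [1.0, 1.6, 0.0], "fov": 60}
direction = view_direction(cam["pos"], cam["target"])  # looks along -z here
```

A renderer would typically build a look-at matrix from pos, this direction, and an up vector, and use fov for the projection.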

Other

  • The camera parameters may not be ideal in some cases; they are coarse solutions. You can take them as initial solutions and adjust them.

3d-front-toolbox's People

Contributors

bwcai, jwzxgy2007


3d-front-toolbox's Issues

NameError when running json2obj.py

PS E:\Datasets\3D-FRONT-ToolBox\scripts> python json2obj.py
00004f89-9aa5-43c2-ae3c-129586be8aaa
Traceback (most recent call last):
File "json2obj.py", line 137, in
v = v.astype(np.float64) * scale
NameError: name 'v' is not defined
PS E:\Datasets\3D-FRONT-ToolBox\scripts>

Request for pretrained model for example.

I wanted to know if you by any chance could also upload the pretrained models for the scene synthesis example so that I don't spend time training a model and can test out the performance right away.

Thank you again for your hard work and for filling the gap left by SUNCG.

missing vertex and face textures

Hi,

Thanks for the helpful scripts. I saved the OBJ files, but without vertex and face texture coordinates, i.e., without 'vt' and 'ftc'.
The 'vt' and 'ftc' variables were missing when execution reached line 138 of run.py.
Removing these variables from line 138 and modifying this function lets the OBJ files be saved (without texture info), but is that expected, or am I missing something for the textures?

Penetrating furniture

While examining the dataset, I discovered that there can be significant overlaps between different pieces of furniture:
This is scene d991e862-ca6b-4dc6-b29b-d6b498e889a9/DiningRoom-12246.

Is there any plan to resolve overlapping furniture like this? Thanks.

Pink furniture in Blender.

I ran the json2obj.py script, then imported the files into Blender. It seems that textures are not loaded at all for the furniture or walls. I'm using Windows 11.
Can anyone help? Thanks a lot.
Sorry for my English

Request regarding data for the Scene Synthesis example

Thank you for providing the dataset and this toolbox.
I wanted to know if you could publish the script, or detailed steps, describing how you produced the .7a files in data/3d-front, as I would like to use the whole dataset; I believe the provided data is only a subset, correct?

Thank you again for everything.

Question about dependency(mathutils,bpy).

Hi,

I'm rendering using scripts/render.py. There is an error at line 57:

direction = target - loc

Errors shown here:

Traceback (most recent call last):
  File "render.py", line 223, in <module>
    render_function(mesh_list,tex_list,cam_info, os.path.join(args.scene_path, scene))
  File "render.py", line 182, in render_function
    point_at(cam, (info['target'][0],3.2,info['target'][1]))
  File "render.py", line 59, in point_at
    direction = target - loc
AttributeError: Vector subtraction: (Vector - Vector) invalid type for this operation

Then I debugged render.py from line 57. Running type(target) and type(loc) both report Vector. However, isinstance(target, mathutils.Vector) returns True while isinstance(loc, mathutils.Vector) returns False, i.e. the two Vector classes appear to come from different mathutils modules.

I find a way to solve this:

if not isinstance(loc, mathutils.Vector):
        loc = mathutils.Vector(loc.to_tuple())

But this is somewhat unnatural.

I think this is caused by incompatible versions of mathutils and bpy.
So could you please tell me the version of these two packages you are using?

Thanks a lot.

Rotation encoding

Hi, could you maybe tell me how the rotation of the objects is encoded in the 4-dimensional vector in the JSON files?
I'm trying to read the files with C++.

In the json2obj script it is used here:

rot = c['rot']
scale = c['scale']
v = v.astype(np.float64) * scale
ref = [0, 0, 1]
axis = np.cross(ref, rot[1:])
theta = np.arccos(np.dot(ref, rot[1:])) * 2
if np.sum(axis) != 0 and not math.isnan(theta):
    R = rotation_matrix(axis, theta)

I'm not sure the following line is correct. It seems like it should check whether the axis has zero length, but instead it checks whether the sum of all elements is zero. I can't really tell without knowing how the rotation is encoded.

if np.sum(axis) != 0 and not math.isnan(theta):

I also noticed that there are some hovering and otherwise misplaced objects in the obj files. I'm not sure if this is related to the rotation or if it's just an error in the dataset.

Thanks!
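For what it's worth, the snippet's use of rot[1:] suggests rot is a unit quaternion with the scalar part first, i.e. (w, x, y, z) — this is an assumption, not something the toolbox documents. Under that assumption, a sketch of decoding it directly, plus the length-based degeneracy check the question asks about:

```python
import math

def quat_to_matrix(q):
    """Standard unit quaternion (w, x, y, z) -> 3x3 rotation matrix (row-major)."""
    w, x, y, z = q
    return [
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ]

def axis_is_degenerate(axis, eps=1e-8):
    """Check the axis length, not the element sum: [1, -1, 0] sums to
    zero but is a perfectly valid rotation axis."""
    return math.sqrt(sum(a * a for a in axis)) < eps

# Identity rotation: scalar part 1, zero vector part.
assert quat_to_matrix([1, 0, 0, 0]) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert axis_is_degenerate([0, 0, 0]) and not axis_is_degenerate([1, -1, 0])
```

Decoding the quaternion directly would sidestep the axis/angle reconstruction (and its degenerate cases) entirely.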

IES lighting files not provided

I am currently attempting to render the dataset by parsing the provided JSONs and rendering with blender cycles.

The provided scene JSON files contain parameters for lights to be placed in the scene. Some of these lights specify an 'ies_file' field, which contains a filename such as 'spot_29.ies', '4_home.ies', and 'fill_light_2.ies'. I understand these to be files that describe how the light from a particular source attenuates, so I would like to use them via this blender node in order to properly replicate the dataset.

However, these IES files are not provided anywhere. Would you be able to provide the IES files I mentioned, and any others used for other scenes, so that I can make use of this dataset?

render the JSON file

Hello, thank you very much for the toolbox you provided. I want to ask a question: can this script render the JSON files of the 3D-FRONT dataset into a black-and-white layout, where white is the floor and the other parts are black? I look forward to your help, thank you.
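As far as I can tell there is no such renderer in this toolbox, but a generic approach is to take the floor meshes from the scene JSON (entries whose type contains "Floor"), project their vertices onto the x-z plane, and fill their triangles into a binary grid. A dependency-free sketch — the triangle list, grid size, and projection convention are all assumptions:

```python
def point_in_triangle(px, py, a, b, c):
    """2D sign-of-cross-product test: the point is inside (or on the edge)
    if it is on the same side of all three edges."""
    def cross(o, d, p):
        return (d[0] - o[0]) * (p[1] - o[1]) - (d[1] - o[1]) * (p[0] - o[0])
    s1 = cross(a, b, (px, py))
    s2 = cross(b, c, (px, py))
    s3 = cross(c, a, (px, py))
    has_neg = s1 < 0 or s2 < 0 or s3 < 0
    has_pos = s1 > 0 or s2 > 0 or s3 > 0
    return not (has_neg and has_pos)

def floor_mask(triangles, width, height):
    """Binary mask: 1 (white) where any floor triangle covers the pixel center."""
    mask = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            if any(point_in_triangle(x + 0.5, y + 0.5, *t) for t in triangles):
                mask[y][x] = 1
    return mask

# A single triangle covering the lower-left half of a 4x4 grid.
m = floor_mask([((0, 0), (4, 0), (0, 4))], 4, 4)
```

In practice one would first map world x-z coordinates into grid coordinates; the sketch keeps them identical for brevity.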

No doors or windows in scene synthesis example

This may be a weird question:
I was rendering some of the examples from the scene synthesis example and noticed that none have doors or windows. Is that for simplicity? If not, is there a way to get the window and door info from the zip files?

Error while trying to train models for the scene synthesis example

remote@pop-os:~/repos/autofurnish/external_experiments/scene_synthesis/deep_synth$ python continue_train.py --data-dir bedroom --save-dir bedroom --train-size 160 --use-count
Building model...
Converting to CUDA...
Building dataset...
Building data loader...
Building optimizer...
=========================== Epoch 0 ===========================
torch.Size([188, 47]) 54 132
torch.Size([49, 28]) 337 159
torch.Size([202, 202]) 148 344
torch.Size([49, 39]) 441 358
torch.Size([45, 46]) 111 118
torch.Size([4, 56]) 226 173
torch.Size([195, 181]) 175 159
torch.Size([40, 40]) 53 207
torch.Size([185, 202]) 73 152
torch.Size([172, 33]) 329 398
torch.Size([176, 57]) 211 81
torch.Size([2, 2]) 153 225
torch.Size([121, 54]) 77 157
torch.Size([45, 47]) 332 117
torch.Size([163, 182]) 397 358
torch.Size([39, 43]) 393 370
torch.Size([49, 28]) 160 163
torch.Size([183, 201]) 154 108
torch.Size([49, 132]) 434 163
Traceback (most recent call last):
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/continue_train.py", line 206, in
train()
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/continue_train.py", line 136, in train
torch.Size([35, 39]) 131 252
for batch_idx, (data, target, existing) in enumerate(train_loader):
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 521, in next
torch.Size([51, 183]) 430 117
data = self._next_data()
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/_utils.py", line 425, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/.venv_furnish/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/continue_dataset.py", line 63, in getitem
composite.add_node(node)
File "/home/remote/repos/autofurnish/external_experiments/scene_synthesis/deep_synth/data/rendered.py", line 189, in add_node
to_add[xmin:xmin+xsize,ymin:ymin+ysize] = h
RuntimeError: The expanded size of the tensor (168) must match the existing size (202) at non-singleton dimension 1. Target sizes: [202, 168]. Tensor sizes: [202, 202]

Questions about texture

Thanks for your great work on this indoor scene dataset. I have downloaded the models and run scripts json2obj.py and I have some questions:

  1. I noticed that most OBJ outputs are without textures. Why?
  2. The outputs for the json "17ff4014-6988-4b6e-9e3e-bf41b2cd9e05.json" have some textures, but the floor and walls are missing textures. How can I add textures?

Are there any supplementary files for parsing houses in Scene Synthesis?

Thank you for providing the 3D-FRONT dataset and this toolbox, which fills the void left by the SUNCG dataset in related work such as scene synthesis.
But I hit a problem when applying PlanIT (PlanIT paper) to 3D-FRONT: it needs support information and architecture information for houses when parsing them, as done in house.py (https://github.com/brownvc/planit/blob/b42bfda1f7cd7c0e07874728ae7b809c09a3e6b7/scene-synth/data/house.py#L74), so I wonder if there are files like '.stats.json' or '.arch.json'?

Missing files from scene synthesis example

I believe house.7z and room.7z are missing, or there is no link to them in the README.
Could you please confirm whether this is on purpose or just a mix-up in the link creation?

Sharing some scene issues with the dataset

Dear 3D-FRONT staff,

Hi, this is Yoonyoung (Jamie) Cho, researcher at the KAIST Intelligent Mobile Manipulations Lab.
Again, thank you for sharing this dataset - the 3D-FRONT dataset will be an invaluable asset in our research.
Just for context: I have previously contacted the team via email, but I figured I should move the conversation here for public discussion.

I have noticed some issues throughout my interaction with the dataset. For public reference, I have validated that the checksum of my local copy matches that of the source. The following command describes the checksum at the time of writing (2021-03-03):

$ md5sum 3D-FUTURE-MODEL.zip
cf20ec68ce7005d5491d2f207a7730e6 3D-FUTURE-model.zip

$ md5sum 3D-FRONT.zip
d8d7521283365c56e5da3ce247fe8006 3D-FRONT.zip

Now that we have established that the errors in the dataset are not due to an error during the download process, I'd like to enumerate some issues that I have noticed over time.

  • Some models that contain the Shadow attribute are broken. An example is 3D-FUTURE-model/3D-FUTURE-model/6914bc9b-d7f3-3d85-9fdf-a8e30b54ed35/raw_model.obj; the scale is severely off, as can be seen below. An example of a scene that contains this model is 0ec7ce97-e93d-4842-9c89-b947136bb393.json.
    snapshot01
    (In the above scene, the scale of the dining chair is severely off, and it's towering above the whole house.)
    Note that this is not an error that can be caught with the default json2obj.py, which appears to silently ignore the issue. The real problem is that loading this scene programmatically through other interfaces such as pybullet causes failures. My current guess for the root cause is that the .obj file is parsed as a scene, which possibly violates some hardcoded assumptions in the scene-processing pipeline.

  • Some of the texture images appear to be broken. An example is 3D-FUTURE-model/6934dea0-1d66-49c4-82c6-4d54d41f9707/texture.png, which triggers (libpng error: Read Error) and fails to load in tools such as MeshLab. Here's what the texture looks like when I visualize it with eog:
    Screenshot from 2021-03-03 14-05-50

  • One of the models has an error: 3DFRONT/3D-FUTURE-model/3D-FUTURE-model/e73ff703-adb1-4d3a-993d-60f6d148bec4/raw_model.obj. An example of a scene that includes this model is 3D-FRONT/99e87673-0512-41c8-b35e-fd1ea39d1f0c.json.

    • The offending line is (raw_model.obj#L109726): f 6970/. Removing this line fixes the issue, but I'm not sure what the initial intent was.
  • Some of the scenes have an apparently empty mesh, such as 3DFRONT/3D-FRONT/6ce404bd-9c24-4a3b-869f-5217310e3741.json. The relevant snippet has been copied below (after formatting):

        {
            "aid": [],
            "jid": "",
            "uid": "49261587189179756/0",
            "xyz": [],
            "normal": [],
            "uv": [],
            "faces": [],
            "material": "53951587189179627/05395",
            "type": "WallOuter",
            "constructid": "4832",
            "instanceid": "4926"
        }
  • Not sure if this is intentional, but many of the scenes are missing models. This seems to be particularly the case for certain classes of objects (door/window). For instance, when I try to load an example scene 3DFRONT/3D-FRONT/f24f4eab-5da2-4308-9145-fad49fc3916b.json, the following shapes are not found:
8799/model
8805/model
12697/model
159369944712376608/model
8806/model
18408/model
20553/model
20553/model
20553/model
20553/model
20553/model
20553/model
20553/model
20553/model
20553/model
20553/model
21419/model
159369944712999808/model
21433/model
  • Finally, because all scenes are essentially missing all door-like objects, every floor has a gap where a door should have been placed. See the below picture for reference (shown without texture for clarity):
    snapshot02
    This essentially means that nearly every scene in 3DFRONT has a discontinuity in the scene floor.
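Two of the issues above — the truncated f 6970/ face record and the empty WallOuter mesh entry — can be screened for defensively before loading. A minimal sketch; the "mesh" key and "xyz"/"faces" field names follow the JSON snippet above, and everything else is an assumption:

```python
def malformed_face_lines(obj_text):
    """1-based line numbers of OBJ 'f' records with fewer than 3 vertex
    references, such as the truncated 'f 6970/' mentioned above."""
    bad = []
    for i, line in enumerate(obj_text.splitlines(), start=1):
        parts = line.split()
        if parts and parts[0] == "f":
            # Keep only tokens that still name at least one index.
            verts = [p for p in parts[1:] if p.strip("/")]
            if len(verts) < 3:
                bad.append(i)
    return bad

def nonempty_meshes(scene):
    """Skip scene-JSON mesh entries whose 'xyz' or 'faces' arrays are empty."""
    return [m for m in scene.get("mesh", []) if m.get("xyz") and m.get("faces")]

obj = "v 0 0 0\nf 1/1 2/2 3/3\nf 6970/\n"
assert malformed_face_lines(obj) == [3]

scene = {"mesh": [{"uid": "49261587189179756/0", "xyz": [], "faces": []},
                  {"uid": "ok/1", "xyz": [0.0, 0.0, 0.0], "faces": [0]}]}
assert [m["uid"] for m in nonempty_meshes(scene)] == ["ok/1"]
```

Such pre-filtering only hides the broken records, of course; fixing the underlying data would still be preferable.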

I hope the enumeration of the above set of issues was helpful! Let me know if there is anything I can do to help. Again, I'd like to stress how much I appreciate that you have shared the dataset with us. I think the dataset really has strong potential, and after some of the above issues have been addressed the dataset would be a lot easier to use for some of us.

Thank you,

Yoonyoung (Jamie) Cho

The category id mapping to its label

Hi, I have another quick question:

Except for the bed, table, and sofa ids provided in variables.py, do you have such mappings for the other furniture categories? Aside from those three categories, there are a lot of other ids without labels. Have you stored those mappings?

Best

How to get textures for floors, walls, etc?

Thanks so much for providing the json2obj scripts, as they prove really useful. I noticed that the scripts do not seem to texture the generated OBJ files. I was wondering how one can resolve the materialID in the JSON to get the corresponding texture data. I don't see any examples of how to do that in this toolbox or the FUTURE one.

Why Ns exponents do not exist in mtl file?

Hi! I'm using simple_3dviz to read data from
3D-FUTURE-model/c2fecd9b-c61e-423a-a48d-08c63931cd1f/model.mtl
simple_3dviz assumes that model.mtl contains a specular exponent, i.e., a line starting with "Ns", but in fact I do not find one. How should I resolve this gap?

The content of model.mtl:

newmtl solid_001_wire

d 1
Tr 0
Tf 1.000000 1.000000 1.000000
illum 2
Ka 0.000000 0.000000 0.000000
Kd 0.000000 0.000000 0.000000
Ks 0.313725 0.313725 0.313725
Ke 0.000000 0.000000 0.000000

map_Ka ./texture.png
map_Kd ./texture.png


newmtl solid_002_wire

d 1
Tr 0
Tf 1.000000 1.000000 1.000000
illum 2
Ka 0.196078 0.196078 0.196078
Kd 0.196078 0.196078 0.196078
Ks 0.705882 0.705882 0.705882
Ke 0.000000 0.000000 0.000000

map_Ka ./texture.png
map_Kd ./texture.png


newmtl solid_003_wire

d 1
Tr 0
Tf 1.000000 1.000000 1.000000
illum 2
Ka 0.500000 0.500000 0.500000
Kd 0.500000 0.500000 0.500000
Ks 0.117647 0.117647 0.117647
Ke 0.000000 0.000000 0.000000

map_Ka ./texture.png
map_Kd ./texture.png


newmtl solid_004_wire

d 1
Tr 0
Tf 1.000000 1.000000 1.000000
illum 2
Ka 0.500000 0.500000 0.500000
Kd 0.500000 0.500000 0.500000
Ks 0.098039 0.098039 0.098039
Ke 0.000000 0.000000 0.000000

map_Ka ./texture.png
map_Kd ./texture.png


newmtl solid_005_wire

d 1
Tr 0
Tf 1.000000 1.000000 1.000000
illum 2
Ka 0.500000 0.500000 0.500000
Kd 0.500000 0.500000 0.500000
Ks 0.137255 0.137255 0.137255
Ke 0.000000 0.000000 0.000000

map_Ka ./texture.png
map_Kd ./texture.png
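One possible workaround (not an official fix) is to patch each material block that lacks an Ns record before handing the file to a strict parser. This is a sketch; the default exponent value of 10.0 is an arbitrary assumption, not something taken from the dataset:

```python
def add_default_ns(mtl_text, ns=10.0):
    """Insert an 'Ns' line after each 'newmtl' whose block lacks one."""
    lines = mtl_text.splitlines()
    out = []
    i = 0
    while i < len(lines):
        out.append(lines[i])
        if lines[i].startswith("newmtl"):
            # Look ahead to the end of this material block for an Ns record.
            j = i + 1
            has_ns = False
            while j < len(lines) and not lines[j].startswith("newmtl"):
                if lines[j].startswith("Ns"):
                    has_ns = True
                j += 1
            if not has_ns:
                out.append(f"Ns {ns:.6f}")  # default exponent; value is a guess
        i += 1
    return "\n".join(out)

patched = add_default_ns("newmtl solid_001_wire\nd 1\nKs 0.3 0.3 0.3")
```

Alternatively, if the viewer allows it, treating a missing Ns as a default on the reader side avoids rewriting the dataset files.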

Any code to generate floorplans with furnitures

Hi,

Thank you for the great work! I have a question about floorplan generation with 3D-FRONT. In your toolbox, is there any code we can use to generate 2D floorplans (room layouts) together with furniture layouts (bed, cabinet, table, etc.)?

Look forward to your reply.

Best

Do you have existing image renderings of scenes/rooms?

Hi,

I don't think I can easily synthesize image renderings with the currently released data. Do you have existing images corresponding to each scene and/or room that we can download? I know 3D-FUTURE has images, but they have filenames like 0000001.jpg that I cannot map to scene IDs.

Thanks,
Zhenyu

Issue with some scenes

Hello,

First of all thank you very much for your very nice work!

I have been trying to use your dataset, and while some rooms/scenes seem to be ok, others are problematic. For example, SecondBedroom-34024 in the scene with json file 1f5862f8-119a-455e-880d-f9bdb56a88e2 is problematic, as the chair is positioned inside the bed frame (please see the attached images).

1f5862f8-119a-455e-880d-f9bdb56a88e2
1f5862f8-119a-455e-880d-f9bdb56a88e2_2
Please note that I used the provided json2obj.py script for generating the scenes.

Do you have any idea what the issue is? Thanks a lot for your help.

Best,
Despoina

about 'door' in the json file

Hi, I find there are no doors in several example meshes generated by the json2obj.py script. Is that true for all the meshes?
In the JSON file, there are meshes whose type is 'door'; is this just an empty door?
Besides, in the 'extension' field there is also information about doors. What is this information for?
