
3dssg's Introduction

I am a Ph.D. candidate at TUM, Germany. My research interests are 3D scene understanding and reconstruction.

  • 🔭 I’m currently working on
    • higher-level scene understanding
    • a novel scene representation

3dssg's People

Contributors

egeozsoy, orsveri, shunchengwu

3dssg's Issues

BUG report in ./utils/util_label.py

Hi!

When running gen_data.py with label_type set to ScanNet20, I found a minor bug in L242-L243 of `util_label.py`:

if nyu40name in label_names:

which should be changed to

if nyu40name in label_names.values():
    id_scan20 = list(label_names.values()).index(nyu40name)+1
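
For context, here is a minimal sketch of the difference, assuming label_names is a dict mapping ScanNet20 ids to names (the actual structure in util_label.py may differ):

label_names = {1: 'wall', 2: 'floor', 3: 'cabinet'}  # assumed structure
nyu40name = 'floor'

print(nyu40name in label_names)           # False: `in` on a dict checks the keys
print(nyu40name in label_names.values())  # True: checks the label names
id_scan20 = list(label_names.values()).index(nyu40name) + 1
print(id_scan20)                          # 2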

Some questions about relationships.txt

According to the code, I think relationships.txt shouldn't contain 'none'. Is that right?
But the relationships.txt I downloaded using preparation.sh contains the 'none' relation.
Should I delete 'none' from relationships.txt after running gen_data_gt.py?

question about number of relationships

Hello! Table 1 of the article states: "Evaluation of the scene graph prediction task on 3RScan/3DSSG [57] with 160 objects and 26 predicate classes. The experiments were conducted on the complete 3D data." But in the code, only 8 relationships are evaluated.

No module named pointnet.relpn

I tried to run the toy example, but there was an error because there is no relpn.py file in the pointnet folder. Am I missing something? Could you advise me on how to get it to run?
Thanks a lot!

Questions on trace mode

Hi!
I'm a student in Beijing and I have several questions on trace mode.

  1. What does trace mode mean and what does it do?
  2. When I try to run the command "python main.py --mode trace --config ./CVPR21/config_CVPR21.json", I get the following error:
    File "./src/dataset_SGFN.py", line 143, in init
    with open(os.path.join(self.root[0],'args.json'), 'r') as f:
    FileNotFoundError: [Errno 2] No such file or directory: '/home/sc/research/PersistentSLAM/python/3DSSG/data/3RScan_ScanNet20_InSeg_Full_20210430/args.json'
    Strangely, I cannot find the directory "/sc" when I look for it directly in the terminal, following the given path. Would you please tell me how to find out what is wrong? (I followed the steps in README.txt of 3DSSG and I have downloaded and unzipped the pre-trained model "CVPR21".)
    I would appreciate it if you could reply.

Using predicted Scene Graphs For Segmentation

Hi, firstly thanks for your great work!

I was trying out instance segmentation using the predicted scene graphs, i.e., merging the segments connected by a "same part" relationship. However, the predictions look like the following: segments predicted to be in a "same part" relationship do not belong to the same predicted semantic object class (which is not possible in reality and is probably just a bad prediction). I am not sure how to handle this and then generate instance maps from the over-segmented maps.
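
For reference, the merging I am attempting is roughly the following sketch, with hypothetical segment ids and a hypothetical edge list (not the repo's data structures):

# Merge over-segmented segments into instances via predicted "same part"
# edges, using a small union-find.
def merge_same_part(segment_ids, same_part_edges):
    parent = {s: s for s in segment_ids}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in same_part_edges:  # every predicted "same part" pair
        parent[find(a)] = find(b)

    # map each segment to the root of its merged component (= instance id)
    return {s: find(s) for s in segment_ids}

instances = merge_same_part([1, 2, 3, 4], [(1, 2), (2, 3)])
# segments 1, 2, 3 collapse into one instance; segment 4 stays separate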

Could you please tell me how you handled segmentation while implementing the paper? It would be very nice if you could share the code as well; if not, a short description would be amazing. Thank you very much.

Regards,
Sayan

Question about pre-trained model results

Thank you for providing the updated code and pre-trained model! But I have several questions about it.

  1. What is the difference between the GT, DENSE, and SPARSE datasets? Could you please tell me where I can find information about them?
  2. Are the results in the README recall@1 values?

Thank you!

about max

Hi, firstly thanks for your great work!
I want to ask how the max operation manifests itself in the node updates.


3RScan dataset download

Hello, congratulations on your work! But I have some problems when trying to download the 3RScan dataset. When I run the .py script to download it, the progress bar doesn't start at all. Can you help me?

Curious about SGFN model learning.

Hello!
In the Scene Graph Fusion system (C++), a partial scene is fed into the already-trained model, and the system then fuses the regional graph into a global graph.
Question 1. It seems that the SGFN model provided in 3DSSG does not take the point cloud of a partial scene as input, but the point cloud of the entire scene, and then creates a local scene graph. Is that right?

Question 2. Does the SGFN model provided in 3DSSG not learn how to fuse, but only how to create a regional scene graph, while the Scene Graph Fusion system (C++) creates a local scene graph when given a partial scene as input and fuses the result into a global scene graph through the algorithm proposed by the authors?

I cannot access the files in "preparation.sh"...

I failed to run the wget commands in "preparation.sh". Besides, when I try to open the links in "preparation.sh" directly, I receive "404 Not Found" (nginx/1.18.0 (Ubuntu)) instead of the files.
Has anyone encountered the same problem?

Questions on generate validation data

Hi, I have a few questions about how to generate validation data with gt.

I've run "gen_data_gt.py" with three different types (train, test, validation), should I store the result in three folder.
for example:
python gen_data_gt.py --type train --pth_out ../data/3RScan_train
python gen_data_gt.py --type validation --pth_out ../data/3RScan_validation
python gen_data_gt.py --type test--pth_out ../data/3RScan_test
are they correct? If not, what should I do?

Size mismatch (20 vs. 160)

The checkpoint you provided seems to be trained with 20 object classes. However, the data generated with the script contains 160 classes. I guess this is controlled by files/classes160.txt. Can you provide another file that contains the 20 classes you use? Thanks.

Test on scannet datasets

Hello, I want to create a 3D scene graph by testing the 3DSSG model on different datasets such as ScanNet and running the inference process. Do you plan to provide inference code or a demo version of that code?
For the 3RScan dataset in this repository, I can already download and use the pre-processed data. However, if I use other datasets, it seems that data such as "color_align.ply", "~.annotated.v2.ply", etc. should be created. Can you tell me how the rendered data was generated?

Thanks for the nice work!

Message passing in TripletGCN

Hi @ShunChengWu,

Thank you for the excellent implementation, great work! I was hoping you could clarify the message-passing scheme you used for the TripletGCN.

In the 3DSSG paper and the SG2IM paper, the processed features of each point cloud object are averaged separately depending on whether it appears as a subject or as an object in a triplet. But in line 64 of network_TripletGCN.py, you directly add the processed features of the subject point cloud and the object point cloud together for each triplet, after the triplets have been processed by nn1(.). Then you scatter_add along the index during aggregation. This is a bit different from the SG2IM implementation here: https://github.com/google/sg2im/blob/2c1bf4a150f8a70c0977200a6719519213367b5c/sg2im/graph.py.

Could you please clarify whether this is meant to differ from the SG2IM implementation, or did I miss something?
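
For concreteness, here is a rough sketch of the two aggregation schemes as I understand them; the tensor names, shapes, and helper functions below are my own assumptions and do not match the repo's code:

import torch

# x_s, x_o: per-triplet subject/object features after nn1(.), shape [E, C]
# edge_index: [2, E], row 0 = subject node index, row 1 = object node index

def aggregate_add(x_s, x_o, edge_index, num_nodes):
    # What I read in network_TripletGCN.py: sum subject and object features
    # per triplet, then scatter-add the same message onto both endpoints.
    msg = x_s + x_o
    out = torch.zeros(num_nodes, x_s.size(1))
    out.index_add_(0, edge_index[0], msg)
    out.index_add_(0, edge_index[1], msg)
    return out

def aggregate_avg(x_s, x_o, edge_index, num_nodes):
    # SG2IM-style: pool subject-role and object-role features separately,
    # then divide by how many triplets each node participates in.
    E, C = x_s.shape
    pooled = torch.zeros(num_nodes, C)
    counts = torch.zeros(num_nodes, 1)
    ones = torch.ones(E, 1)
    pooled.index_add_(0, edge_index[0], x_s)
    pooled.index_add_(0, edge_index[1], x_o)
    counts.index_add_(0, edge_index[0], ones)
    counts.index_add_(0, edge_index[1], ones)
    return pooled / counts.clamp(min=1)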

Thank you very much!
D.

problem about running "python main.py --mode trace --config ./CVPR21/config_CVPR21.json"

Dear author,

When trying to run trace mode with the provided CVPR21 model files, an error pops up:
File "D:\SRC\3DSSG./src\dataset_SGFN.py", line 144, in init
with open(os.path.join(self.root[0],'args.json'), 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/sc/research/PersistentSLAM/python/3DSSG/data/3RScan_ScanNet20_InSeg_Full_20210430/args.json'

By checking the config_CVPR21.json file, I found that this path is the "dataset/root" variable in it. I wonder how to get the "args.json" file to run the code, and what other files should be present in the "3RScan_ScanNet20_InSeg_Full_20210430" folder.

Looking forward to your reply~ Many thanks!

Best regards,
Yifei Chen

Question about 3DSSG method

Thanks for your great work!
I have a question about your implementation of the method in 3DSSG. In Table 2, the results show that your method outperforms the method in 3DSSG.

How can I reproduce the results in Table 2 using your code? And how should I modify config_CVPR21.json to use the method in 3DSSG?
Looking forward to your reply!

question on node properties

Thanks for your work and time.
Do the resulting nodes have no properties? I noticed that the 3DSSG dataset has Static Properties, Dynamic Properties, and Affordances.

[1] SceneGraphFusion: Incremental 3D Scene Graph Prediction from RGB-D Sequences
[2] Learning 3D Semantic Scene Graphs from 3D Indoor Reconstructions

Data splits

Thanks for your great work! I want to reproduce the results of scene graph prediction using the GT segmentation shown in Table 1 of your paper, but since there is no official codebase for the original 3DSSG dataset, I can't access the original data splits provided for this task. Could you tell me how to generate them or where I could find them? Thanks so much!

gen_data_gt.py

In gen_data_gt.py, some settings, files, and functions are missing, such as RELEASE_PATH, "/home/sc/research/PersistentSLAM/data/classes.txt", and util.getLabelMapping.

What is Classes 160 in preparation.sh

Hi ShunCheng,

I checked the paper "Learning 3D Semantic Scene Graphs from 3D Indoor Reconstructions" in detail, and I am pretty sure 3DSSG has 534 object classes, which is the global id annotated in the json file. So I am really confused about the preparation work for classes 160, which is not given as global ids, and also not in NYU40, Eigen13, Rio27 or Rio7 as shown in 3RScan.v2 Semantic Classes - Mapping.csv and the original 3RScan data.

Could you please provide an annotation for Classes160.txt?

Cheers

Preprocessed data

Hi,

Could you please release the preprocessed data? It is a bit complicated to preprocess the data...

Thanks a lot.

plydata = util_ply.load_rgb(path)

When I use the SGPN dataset and SGPN model, I tried to use the RGB information, but a lot of IndexErrors happen in the load_rgb function of the util_ply file. I really want to know how I can fix this problem.

Split scene into subgraph

In the 3DSSG paper, you split the scene into subgraphs with 4-9 nodes. Can you provide the script?

inseg.ply: how to get these files?

Hi!
When I'm trying to run main.py to train on 3RScan, I get this error:

ValueError: string is not a file: /3D/Datasets/1_3RScan/from_3DSSG/ab835faa-54c6-29a1-9b55-1a5217fcba19/inseg.ply

Apparently, an inseg.ply file is needed, but I only have semseg.v2.json, labels.instances.annotated.v2.ply and labels.instances.align.annotated.v2.ply in the directory. What am I doing wrong?

(I downloaded the 3RScan dataset, then used the gen_data_gt.py script to prepare the data. I assumed this is enough to run main.py and that I don't need to do what is described in the next section, "Generate training/evaluation data from estimated segmentations". Am I right?)

Also, as I understand it, if I want to run your trained model, it's better to do it on the ScanNet dataset. How should I prepare the data in that case? Using the gen_data_scannet.py script?

Question about using the ScanNet dataset in Run_GenSeg.py

Thanks for your great work!

I want to use the ScanNet dataset to train this model.
When I followed Run_GenSeg.py to generate InSeg.ply from the ScanNet dataset, I found that it needs an [intrinsics.txt](https://github.com/ShunChengWu/SceneGraphFusion/blob/5bf9017c00949aedca1430854240667b3fa06565/libDataLoader/src/dataloader_scannet.cpp#LL34C62-L34C62).

But the ScanNet dataset only provides scenexxx.txt (like this one from scene0709_00.txt):

axisAlignment = 0.945519 0.325568 0.000000 -5.384390 -0.325568 0.945519 0.000000 -2.871780 0.000000 0.000000 1.000000 -0.064350 0.000000 0.000000 0.000000 1.000000
colorHeight = 968
colorToDepthExtrinsics = 0.999973 0.006791 0.002776 -0.037886 -0.006767 0.999942 -0.008366 -0.003410 -0.002833 0.008347 0.999961 -0.021924 -0.000000 0.000000 -0.000000 1.000000
colorWidth = 1296
depthHeight = 480
depthWidth = 640
fx_color = 1170.187988
fx_depth = 571.623718
fy_color = 1170.187988
fy_depth = 571.623718
mx_color = 647.750000
mx_depth = 319.500000
my_color = 483.750000
my_depth = 239.500000
numColorFrames = 5578
numDepthFrames = 5578
numIMUmeasurements = 11834
sceneType = Apartment
I wonder how to generate intrinsics.txt. Could you tell me how to convert it from scenexxx.txt?
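
For what it's worth, this is roughly how I would pull the depth intrinsics out of the scene .txt; the output layout written below is my own guess, not necessarily the format the SceneGraphFusion data loader expects:

# Parse a ScanNet scene .txt (key = value per line) and write a simple
# intrinsics file. The layout "width height fx fy cx cy" is assumed.
def parse_scene_info(path):
    info = {}
    with open(path) as f:
        for line in f:
            if '=' in line:
                key, value = line.split('=', 1)
                info[key.strip()] = value.strip()
    return info

info = parse_scene_info('scene0709_00.txt')
with open('intrinsics.txt', 'w') as f:
    f.write('{} {} {} {} {} {}\n'.format(
        info['depthWidth'], info['depthHeight'],
        info['fx_depth'], info['fy_depth'],
        info['mx_depth'], info['my_depth']))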

How to get the results of Table 1 in your paper?

Hi! Excellent work!
But I have one question. According to your Table 1, there should be 26 different relationships. After using gen_data_gt.py to generate the training data, I only get a relationships.txt with 7 different relationships: [supported by, attached to, standing on, hanging on, connected to, part of, build in].

Question about trace mode when I run main.py

Hello, I am a student from Xi'an.
When I run main.py with the pre-trained config, I get these errors:
python main.py --mode trace --config ./CVPR21/config_CVPR21.json
Traceback (most recent call last):
  File "main.py", line 6, in <module>
    from src.SceneGraphFusionNetwork import SGFN
  File "/home/bobo/project/3DSSG/src/SceneGraphFusionNetwork.py", line 6, in <module>
    from model_SGFN import SGFNModel
  File "./src/model_SGFN.py", line 11, in <module>
    from network_PointNet import PointNetfeat,PointNetCls,PointNetRelCls,PointNetRelClsMulti
  File "./src/network_PointNet.py", line 17, in <module>
    import op_utils
  File "./src/op_utils.py", line 7, in <module>
    from torch_geometric.nn.conv import MessagePassing
  File "/home/bobo/anaconda3/envs/SceneGraphFusion/lib/python3.8/site-packages/torch_geometric/__init__.py", line 5, in <module>
    import torch_geometric.data
  File "/home/bobo/anaconda3/envs/SceneGraphFusion/lib/python3.8/site-packages/torch_geometric/data/__init__.py", line 1, in <module>
    from .data import Data
  File "/home/bobo/anaconda3/envs/SceneGraphFusion/lib/python3.8/site-packages/torch_geometric/data/data.py", line 8, in <module>
    from torch_sparse import coalesce, SparseTensor
  File "/home/bobo/anaconda3/envs/SceneGraphFusion/lib/python3.8/site-packages/torch_sparse/__init__.py", line 14, in <module>
    torch.ops.load_library(importlib.machinery.PathFinder().find_spec(
AttributeError: 'NoneType' object has no attribute 'origin'
Would you please tell me how to find out what is wrong? (I followed the steps in README.txt of 3DSSG and I have downloaded and unzipped the pre-trained model "CVPR21".)
I would appreciate it if you could reply.

About Trace mode

Hello, thank you for the great work!

I tried to train a modified SGFN and run the model in the incremental C++ framework.

I want to create the files for the traced folder, like traced/gcn_1_edgeatten_prop, traced/obj_pnetenc, etc.

How can I convert the trained model checkpoint (model_best.pt) to the CVPR21_traced file format?

Can you tell me about the format or file type of the CVPR21_traced model?

Is there some code for trace mode?
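
To show what I mean by converting the checkpoint, here is a minimal sketch using torch.jit.trace; the dummy encoder, input shape, and output file name are my own assumptions, not the repo's actual modules or trace format:

import torch
import torch.nn as nn

# Stand-in for the trained point encoder; in practice this would be the
# corresponding submodule of the SGFN model loaded from model_best.pt.
class DummyPointEncoder(nn.Module):
    def __init__(self, in_dim=3, out_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Conv1d(in_dim, 64, 1), nn.ReLU(),
                                 nn.Conv1d(64, out_dim, 1))

    def forward(self, pts):                 # pts: [B, 3, N]
        return self.mlp(pts).max(dim=2)[0]  # global max pooling over points

encoder = DummyPointEncoder().eval()
example = torch.randn(1, 3, 256)            # assumed input layout
traced = torch.jit.trace(encoder, example)  # records the forward pass
traced.save('obj_pnetenc.pt')               # file name is illustrative only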

scene graph generation using gt segmentation

In 3DSSG/data_processing/gen_data_gt.py, for scene graph generation using GT segmentation,
line 299 has target_relationships = ['supported by', 'attached to','standing on','hanging on','connected to','part of','build in'].
According to your paper, there should be 26 different relationships, following relationships.txt. Is that correct?
In addition, is 'none' included in target_relationships?
Look forward to your reply.

Files for data preparation missing

Hi, thanks for your great work!

I am trying to run your code, but preparation.sh, references.txt and rescans.txt, which are needed for data preparation, are missing from the ./files directory.

Will you release these files?

Thanks!

Typo in Dataset Download

Hi! I found a small typo in the dataset download command. In your README.md,
python scripts/Run_prepare_dataset_3RScan.py --download --thread 8
should be replaced by
python scripts/RUN_prepare_dataset_3RScan.py --download --thread 8

Relationship evaluation and reported paper results

I have a question regarding your evaluation code for the relationship metric, more specifically how you handle GT None edges/relationships.

I am not sure whether this snippet in your code is entirely correct:
https://github.com/ShunChengWu/3DSSG/blob/master/utils/util_eva.py#L159-L174

if len(gt_r) == 0:
    # Ground truth is None
    indices = torch.where(sorted_conf_matrix < threshold)[0]
    if len(indices) == 0:
        index = maxk+1
    else:
        index = sorted(indices)[0].item()+1
    temp_topk.append(index)
for predicate in gt_r: # for the multi rel case
    gt_conf = conf_matrix[gt_s, gt_t, predicate]
    indices = torch.where(sorted_conf_matrix == gt_conf)[0]
    if len(indices) == 0:
        index = maxk+1
    else:
        index = sorted(indices)[0].item()+1
    temp_topk.append(index)

So if the GT edge/predicate is None, which corresponds to gt_r being empty, you have a separate evaluation where you only check whether your top predictions fall below your threshold. This means you are not evaluating whether the object nodes are correct; you only evaluate that no predicate is predicted.
However, this is not really in the spirit of the relationship metric, right? Maybe it produces better results than the method can actually provide. I think that to evaluate the triplet correctly, you should still check whether the object nodes are predicted correctly.
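
To make the suggestion concrete, here is a rough sketch of what I mean, with hypothetical names that do not match the repo's variables:

# Credit a GT-None edge only if the best predicate confidence falls below the
# threshold AND both endpoint nodes are classified correctly; otherwise count
# it as a miss (rank maxk + 1).
def none_edge_rank(sorted_conf, threshold, node_pred, node_gt, gt_s, gt_t, maxk):
    below = [i for i, c in enumerate(sorted_conf) if c < threshold]
    nodes_ok = (node_pred[gt_s] == node_gt[gt_s]
                and node_pred[gt_t] == node_gt[gt_t])
    if below and nodes_ok:
        return below[0] + 1
    return maxk + 1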

Am I missing anything in your evaluation that justifies this procedure, or is it indeed slightly incorrect?

questions about point_dim

In the scene graph prediction task with ground-truth segmentation masks, only the xyz coordinates of the point cloud are used. I'm curious whether there is some limitation of this problem setting that forbids using other features such as RGB information and normal vectors, or is it just that they are not crucial for this task? Thanks!

How to set up a bigger BATCH_SIZE?

Thanks for your previous reply, I've successfully trained the model.

I checked the GPU status during training and found the memory usage is not very high: it shows 2 GB/12 GB on my RTX 2080Ti. I want to speed up the process, but I can't find a batch_size option in the config file. Is it proper to raise all the batch_size values in SceneGraphFusionNetwork.py?

Question about running eval mode

Hi!

I have a question about the pipeline for running the model in eval mode.

I followed your instructions to generate ground-truth data for training/evaluation. In ./data/gen_data_gt.py, you provide 3 choices of splits: ['train', 'test', 'validation']. I ran the script for those 3 split types. The generated test_scans.txt and relationships_test.json files are EMPTY, so I wonder whether this is the expected behavior.

Besides, when I run main.py in eval mode, the argument 'split_type' is set to 'test_scans', which is empty and causes an exception.

self.dataset_eval = build_dataset(self.config,split_type='test_scans', shuffle_objs=False,

How can I fix this error? Do I need to generate the test data in another way, or should I change 'split_type' to 'validation_scans'?

Question on generating evaluation data

Hi!
Sorry to bother you again, but I encountered several more questions while preparing the evaluation data.

  1. Follow "Generate training/evaluation data with GT segmentations", I directly run "gen_data_gt.py" and got an error: the following arguments are required: --pth_out
    I add a path and try it again, another error is showed.
    Detailed information is showed as the following picture. Could you please have a look on it and show me how to sucessfully generate evaluation data with GT segmentations?
    2021-12-02 21-41-52 的屏幕截图

  2. I wonder whether it is enough to only run "gen_data_gt.py" if I merely want to evaluate the given pre-trained network?

Looking forward to your reply.

Typo in command line

I think 'python script/Run_prepare_dataset_3RScan.py --download --thread 8' should be 'python scripts/RUN_prepare_dataset_3RScan.py --download --thread 8' in the README.md.
