
partnet_dataset's Introduction

PartNet: A Large-scale Benchmark for Fine-grained and Hierarchical Part-level 3D Object Understanding

Dataset Overview

Figure 1. The PartNet Dataset Example Visualization.

Introduction

We present PartNet: a consistent, large-scale dataset of 3D objects annotated with fine-grained, instance-level, and hierarchical 3D part information. Our dataset consists of 573,585 part instances over 26,671 3D models covering 24 object categories. This dataset enables and serves as a catalyst for many tasks such as shape analysis, dynamic 3D scene modeling and simulation, affordance analysis, and others. Using our dataset, we establish three benchmarking tasks for evaluating 3D part recognition: fine-grained semantic segmentation, hierarchical semantic segmentation, and instance segmentation. We benchmark four state-of-the-art 3D deep learning algorithms for fine-grained semantic segmentation and three baseline methods for hierarchical semantic segmentation. We also propose a novel method for part instance segmentation and demonstrate its superior performance over existing methods.

About the paper

PartNet was accepted to CVPR 2019. See you at Long Beach, CA.

Our team: Kaichun Mo, Shilin Zhu, Angel X. Chang, Li Yi, Subarna Tripathi, Leonidas J. Guibas and Hao Su from Stanford, UCSD, SFU and Intel AI Lab.

arXiv Version: https://arxiv.org/abs/1812.02713

Project Page: https://partnet.cs.stanford.edu/

Video: https://youtu.be/7pEuoxmb-MI

About the Dataset

PartNet is part of the ShapeNet effort, and we provide the PartNet data downloading instructions on the official ShapeNet webpage. You need to be a registered user in order to download the data. Please fill in this form if you have any feedback for improving PartNet.

Data visualization

We provide visualization pages for the PartNet data. For the raw annotation (before merging), use https://partnet.cs.stanford.edu/visu_htmls/42/tree_hier.html. For the final data (after merging), use https://partnet.cs.stanford.edu/visu_htmls/42/tree_hier_after_merging.html. Replace 42 with the annotation id of your model.
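The URL pattern above is easy to script; a minimal sketch (the helper name is ours, not part of the release):

```python
def visu_url(anno_id, after_merging=True):
    """Build the PartNet visualization URL for a given annotation id."""
    page = "tree_hier_after_merging.html" if after_merging else "tree_hier.html"
    return f"https://partnet.cs.stanford.edu/visu_htmls/{anno_id}/{page}"

# visu_url(42)        -> final (after-merging) visualization page
# visu_url(42, False) -> raw (before-merging) visualization page
```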

Errata

We have tried our best to design the annotation interface, instruct the annotators to provide high-quality annotations, and obtain cross-validation among different workers. We have also conducted two rounds of data verification to date to eliminate obvious annotation errors. However, given that annotating such large-scale, fine-grained part segmentation is challenging, there may still be some annotation errors in PartNet. Based on a rough examination, we believe the error rate is below 1%, measured in parts.

Dear PartNet users, we need your help in improving the quality of PartNet while you use it. If you find any problematic annotation, please let us know by filling in this errata for the PartNet v0 release. We will fix the errors in the next PartNet release. Thank you!

Annotation System (3D Web-based GUI)

We release our Annotation Interface in this repo.

PartNet Experiments

Please refer to this repo for the segmentation experiments (Section 5) in the paper.

TODOs

  • We will host online PartNet challenges on the 3D shape fine-grained semantic segmentation, hierarchical semantic segmentation, and fine-grained instance segmentation tasks. Stay tuned.
  • More annotations are coming. Please fill in this form to tell us what annotations you would like us to add to PartNet.
  • We are integrating PartNet visualization as part of ShapeNet visualization.

Citations

@InProceedings{Mo_2019_CVPR,
    author = {Mo, Kaichun and Zhu, Shilin and Chang, Angel X. and Yi, Li and Tripathi, Subarna and Guibas, Leonidas J. and Su, Hao},
    title = {{PartNet}: A Large-Scale Benchmark for Fine-Grained and Hierarchical Part-Level {3D} Object Understanding},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2019}
}

Please also cite ShapeNet if you use ShapeNet models.

@article{chang2015shapenet,
    title={{ShapeNet}: An Information-Rich {3D} Model Repository},
    author={Chang, Angel X and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and Song, Shuran and Su, Hao and others},
    journal={arXiv preprint arXiv:1512.03012},
    year={2015}
}

About this repository

This repository provides the meta-files for PartNet release v0.

    stats/
        all_valid_anno_info.txt         # Store all valid PartNet Annotation meta-information
                                        # <anno_id, version_id, category, shapenet_model_id, annotator_id>
        before_merging_label_ids/       # Store all expert-defined part semantics before merging
            Chair.txt
            ...
        merging_hierarchy_mapping/      # Store all merging criterion
            Chair.txt
            ...
        after_merging_label_ids/        # Store the part semantics after merging
            Chair.txt                   # all part semantics
            Chair-hier.txt              # all part semantics that are selected for Sec 5.2 experiments
            Chair-level-1.txt           # all part semantics that are selected for Sec 5.1 and 5.3 experiments for chair level-1
            Chair-level-2.txt           # all part semantics that are selected for Sec 5.1 and 5.3 experiments for chair level-2
            Chair-level-3.txt           # all part semantics that are selected for Sec 5.1 and 5.3 experiments for chair level-3
            ...
        train_val_test_split/           # A tentative train/val/test split (may change for the official v1 release and PartNet challenges)
            Chair.train.json
            Chair.val.json
            Chair.test.json
    scripts/
        merge_result_json.py                    # Merge `result.json` (raw annotation) to `result_merging.json` (after semantic clean-up)
                                                # This file will generate a `result_merging.json` in `../data/[anno_id]/` directory
        gen_h5_ins_seg_after_merging.py         # An example usage python script to load PartNet data, check the file for more information
        geometry_utils.py                       # Some useful helper functions for geometry processing
    data/                                       # Download PartNet data from Google Drive and unzip them here
        42/
            result.json                     # A JSON file storing the part hierarchical trees from raw user annotation
            result_after_merging.json       # A JSON file storing the part hierarchical trees after semantics merging (the final data)
            meta.json                       # A JSON file storing all the related meta-information
            objs/                           # A folder containing several part obj files indexed by `result.json`
                                            # Note that the parts here are not the final parts; an individual obj may not be meaningful on its own.
                                            # Please refer to `result.json` when reading the part obj files. Several obj files may make up one part.
                original-*.obj              # Indicate this is an exact part mesh from the original ShapeNet model
                new-*.obj                   # Indicate this is a smoothed and cut-out part mesh in PartNet annotation cutting procedure
            tree_hier.html                  # A simple HTML visualization for the hierarchical annotation (before merging)
            part_renders/                   # A folder with rendered images supporting `tree_hier.html` visualization
            tree_hier_after_merging.html    # A simple HTML visualization for the hierarchical annotation (after merging)
            part_renders_after_merging/     # A folder with rendered images supporting `tree_hier_after_merging.html` visualization
            point_sample/                   # We sample 10,000 points for point cloud learning
                pts-10000.txt                               # point cloud directly sampled from the combination of part meshes under `objs/`
                label-10000.txt                             # the labels are the id in `result.json`
                sample-points-all-pts-nor-rgba-10000.txt    # point cloud (with normals and RGBA) directly sampled from the whole ShapeNet model
                sample-points-all-label-10000.txt           # labels for `sample-points-all-pts-nor-rgba-10000.txt`, transferred from `label-10000.txt`
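As a sketch of how the hierarchical JSON files listed above can be consumed, assuming each node carries `name`, optional `objs`, and optional `children` keys (in practice the record would come from `json.load` on `result_after_merging.json`; the toy record below is ours):

```python
import json

# Toy stand-in for: json.load(open("data/42/result_after_merging.json"))
record = json.loads("""
[{"name": "chair", "children": [
    {"name": "chair_seat", "objs": ["original-1"]},
    {"name": "chair_back", "objs": ["original-2", "new-3"]}
]}]
""")

def iter_leaves(node, prefix=""):
    """Yield (part_path, obj_file_list) for every leaf part in the tree."""
    path = f"{prefix}/{node['name']}" if prefix else node["name"]
    children = node.get("children", [])
    if not children:
        yield path, node.get("objs", [])
    for child in children:
        yield from iter_leaves(child, path)

leaves = [leaf for root in record for leaf in iter_leaves(root)]
```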

Questions

Please post issues on this GitHub repo page for questions and further help. For data annotation errors, please fill in this errata.

License

MIT License

Updates

  • [March 29, 2019] Data v0 with updated data format (result.json and result_after_merging.json) released. Please re-download the data.
  • [March 12, 2019] Data v0 released.

partnet_dataset's People

Contributors

daerduocarey, dustinpro


partnet_dataset's Issues

24 categories - no. of objects

Hi,

[1] How many 3D objects does each of the 24 classes have?

[2] Can ground-truth visualizations of the segmented objects be provided for each class, as given in ShapeNet?

Part Renderer

We noticed that each object part in the PartNet dataset is nicely rendered, as shown in the corresponding annoID/parts_render/ folder. The rendering results are very impressive. Could you please provide the code used for part rendering?

Any plan to release the script used for generating the point cloud data?

Hi,

Thanks for the great work and the dataset. I am trying to understand how the dataset is constructed. I noticed there is point cloud data, which looks like 10k points sampled from the original ShapeNet model. I found some util functions in the repo, but I cannot find where those functions are used.

In this case, do you have any plan to release the script to generate the point cloud?

Best

Mesh labels?

Thanks for the dataset with rich annotations! Do you provide part labels for meshes for Sections 5.1 and 5.3 (or just point cloud labels)?

Subparts?

Is there an easy way to count the number of subparts in PartNet?

partnet objects with mujoco sim

Hello,

Is there an example of loading PartNet objects in the MuJoCo simulator? Or do you advise using the PyBullet sim?

Many thanks for your help.

Unable to download the 10 GB chunk

Hi,
I filled in the Google form and wish to download the 10 GB chunk of the PartNet dataset, but it seems there is no option to do so.
Please advise.

Missing labels for obj files

Thank you for your work!

I find that some objs are not labeled in "result_after_merging.json". For example, for anno_id=2297 there is no label for original-80.

There are some other missing objs. Which label should I assign to them?

Online benchmark

Thanks for your work. This is a really excellent dataset. But I noticed that only the train and eval splits are provided for the semantic and instance segmentation tasks. When will you release the online benchmark?

24 Categories

The data_v0 repository consists of folders (e.g. 42) with off, pts, json, etc. files. However, no .cla file is provided specifying which of these folders belongs to each of the 24 categories.
It would be great if one were provided.

ShapeNetV1 with PartNet misalignment

Hi,
I used the matrices provided HERE to align ShapeNet v1 and PartNet. First I added an additional column to the PartNet points (Nx3 -> Nx4); then I did a matrix multiplication like ''pts_homo = pts_homo @ (np.linalg.inv(trans_matrix)).transpose()'' to try to align them with ShapeNet. But after I visualized the transformed point cloud and the corresponding ShapeNet mesh (with the Python library open3d), they are actually not aligned.
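For reference, the transform procedure described above can be sketched as follows. This is a hedged sketch, not the official alignment script; whether the forward or the inverse matrix is needed depends on which direction the provided 4x4 matrix maps:

```python
import numpy as np

def apply_homogeneous(pts, trans_matrix, invert=False):
    """Apply a 4x4 homogeneous transform to an (N, 3) point array."""
    mat = np.linalg.inv(trans_matrix) if invert else trans_matrix
    pts_homo = np.hstack([pts, np.ones((len(pts), 1))])  # N x 3 -> N x 4
    out = pts_homo @ mat.T
    return out[:, :3] / out[:, 3:4]  # dehomogenize; last column stays 1 for affine maps

# Toy sanity check with a pure translation by (1, 2, 3)
T = np.eye(4)
T[:3, 3] = [1.0, 2.0, 3.0]
moved = apply_homogeneous(np.zeros((1, 3)), T)
```

Applying the function with `invert=True` should undo the transform exactly, which is a quick way to verify the matrix is being applied in a consistent direction.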
I would very much appreciate it if you could enlighten me a bit about this issue. Thanks in advance.

points sample from which mesh

  1. Which mesh are the points sampled from, the objs or the normalized obj? It seems that for some anno-ids there are no normalized objs.
  2. You mention that "pts-10000.txt" is a "point cloud directly sampled from the combination of part meshes under objs/".
    How do you define "combination"? Do you merge duplicate vertices?
  3. You mention that "sample-points-all-pts-nor-rgba-10000.txt" is a "point cloud directly sampled from the whole ShapeNet model with labels transferred from label-10000.txt". How do you transfer the labels? Are any scripts available?
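One common way to transfer labels between two point sets sampled from the same shape is nearest-neighbor assignment; a minimal sketch (this is an assumption about the approach, not necessarily the authors' script):

```python
import numpy as np

def transfer_labels(src_pts, src_labels, dst_pts):
    """Give each destination point the label of its nearest source point."""
    # Brute-force pairwise squared distances; fine for ~10k points,
    # switch to a KD-tree (e.g. scipy.spatial.cKDTree) for larger sets.
    d2 = ((dst_pts[:, None, :] - src_pts[None, :, :]) ** 2).sum(axis=-1)
    return src_labels[d2.argmin(axis=1)]

src = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
lab = np.array([7, 12])
dst = np.array([[0.1, 0.0, 0.0], [0.9, 1.0, 1.0]])
out = transfer_labels(src, lab, dst)
```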

Question on hierarchy on class-wise level

Hi there,

Looking at the paper and the dataset, it seems that the hierarchy currently comes from the part level. Does such a hierarchy also hold at the class level? For example, for the class of lamps, we know sub-classes include street lamp, ceiling lamp, etc., as illustrated in Figure 3 of the main paper. How could we obtain this class-level hierarchy from the dataset?

Thanks and looking forward to your reply!
-Yiru

Same 3D model as ShapeNet or not?

Hi,
Are 2D RGB images rendered from different angles provided for the 3D models, or should we render RGB images ourselves? Are the 3D models in PartNet the same as the 3D models in ShapeNet? If yes, do those models share the same id? For example, if a chair model's id in PartNet is "xxxxxx", is this 3D model also in ShapeNet under the same id "xxxxxx"?

Are the models normalized?

For each object, all parts can be composed into a complete mesh. Are the meshes normalized into a unit sphere or a 3D bounding box?
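For context, a common unit-sphere normalization looks like the sketch below. Whether PartNet applies exactly this is precisely the question above; the function is purely illustrative:

```python
import numpy as np

def normalize_to_unit_sphere(pts):
    """Center points at the bounding-box center; scale the farthest point to radius 1."""
    center = (pts.max(axis=0) + pts.min(axis=0)) / 2.0
    pts = pts - center
    radius = np.linalg.norm(pts, axis=1).max()
    return pts / radius

pts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [1.0, 3.0, 0.0]])
out = normalize_to_unit_sphere(pts)
```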

data generation script for semantic segmentation

Do you have a semantic segmentation data generation script that generates the data in the sem_seg_h5 folder (similar to the gen_h5_ins_seg_after_merging.py script)?

I am currently reproducing this script and it would be nice to have ground truth.

Alignment between ShapeNetCore v2 and PartNet point cloud

After downloading both the ShapeNetCore v2 and the PartNet datasets, I am having trouble figuring out how to align the ShapeNet meshes with the PartNet point cloud data. I used PyBullet to load the mesh and the point cloud. To obtain the picture below, I first found the correspondence between the annotation id and the ShapeNet id. I then loaded the mesh under "/ShapeNetCore.v2/<category_id>/<shapenet_id>/models/model_normalized.obj" and then loaded "/data_v0/<anno_id>/point_sample/sample-points-all-pts-label-10000.ply". To produce the picture below, the <category_id> is 03797390, the <shapenet_id> is 7223820f07fd6b55e453535335057818, and the <anno_id> is 8745. The exact script to reproduce the picture is here. As seen in the picture below:

  1. The scale of the point cloud is much larger.
  2. The handle of the point cloud is on the top, whereas the handle of the mesh is on the bottom.
  3. The origin of the point cloud is at the centre of the mug, whereas the origin of the mesh is towards the rim of the mug.

I am sure it might be something I failed to do on my part, but I am struggling to figure out why they don't align.

Real-world Dimensions

Since the ShapeNet dataset includes real-world dimensions and estimates of material composition, volume, and weight, I wonder if PartNet currently has ground-truth real-world dimensions and other useful metadata for each object.

Error in PartNet data-v0 file structure?

Thank you for open-sourcing such a great dataset. However, my download of PartNet's data-v0 does not match the documented file structure (screenshots omitted). Did I download it incorrectly?

Point cloud RGB and ground truth sometimes can be misleading

I find that most of the RGB information that comes with the point clouds is unusable. I understand it's an issue with point sampling from the mesh. Another related problem is the ground-truth labeling: when two mesh surfaces are very close to each other and belong to different classes, the sampled points lie almost on the same surface. This kind of labeled data would be harmful for training the network, I suppose? Could such data be taken out in a future release?

Some examples are shown below, like the inside and outside of a lamp base, and the bed sheet and bed board. The left model is shown with RGB info and the right model with the ground-truth label.


About the `ins_seg_h5.zip` and `sem_seg_h5.zip` data provided by the PartNet

Hi @daerduoCarey

I have downloaded the ins_seg_h5.zip and sem_seg_h5.zip data.

For ins_seg_h5.zip, it contains ins_seg_h5, ins_seg_h5_for_detection, ins_seg_h5_for_sgpn and ins_seg_h5_gt. What are the differences between them, and if I want to set up my research on the instance segmentation task, which data file should I adopt?

For sem_seg_h5.zip, it only contains sem_seg_h5. I wonder which of the semantic segmentation tasks introduced in your paper (fine-grained semantic segmentation or hierarchical semantic segmentation) this data file relates to?

Waiting for your reply!
Best regards!

It is hard to download the data

The size of the data is too big (95 GB). When the network connection breaks, it is difficult to download the complete data. Could you make the chunks smaller, please?

Substituting handle collision STL with V-HACD generated convex STLs

I am currently trying to create new cabinet and drawer models similar to the PartNet ones, and I am facing the problem of substituting the collision meshes of drawer handles with V-HACD-generated ones using Blender. I currently have to do this manually; is there any way to automate it?

Use Partnet dataset in Structurenet repo

Hello, hope you are doing well.
My question is: how can I convert the PartNet data structure for any 3D object into the format used in https://github.com/daerduoCarey/structurenet for training purposes?
For example, with the dataset in .json format and splits like:
--train_dataset 'train_no_other_less_than_10_parts.txt'
--val_dataset 'val_no_other_less_than_10_parts.txt' \

Reassembling of parts’ models (.obj files)

Hi there, I would like to:

  1. Reassemble the .obj files of the parts into their full object automatically
  2. Pass the full object into Unity, while maintaining the hierarchical structure of its parts

Do you have any code that can help with any part of this process? Thank you.

Problem logging in

Hello and thank you for your work!

I am trying to log in to the ShapeNet website to download the dataset (I already have my account approved), but I am getting the following error message:

Exception: MongoError: topology was destroyed

I have seen somebody mention this message in #18. Could you please try to fix this again?

Best regards!

Problem downloading the dataset

System:

OS version: [Ubuntu 20.04]
Python version : [Python 3.8]
SAPIEN version (pip freeze | grep sapien):
Environment: [Desktop]
Describe the bug
I am getting the following error when downloading the dataset.
    NameError                                 Traceback (most recent call last)
    ----> 1 urdf = download_partnet_mobility(sapien_assets_id, token=my_token)
    NameError: name 'download_partnet_mobility' is not defined

To Reproduce
Steps to reproduce the behavior (use pastebin for code):

Pip install sapien and then follow the download steps on the SAPIEN website:

    import sapien
    my_token = "my token in quotes"
    sapien_assets_id = 179
    urdf = download_partnet_mobility(sapien_assets_id, token=my_token)
Expected behavior
Successful download of the partnet dataset

Additional context
I have successfully downloaded sapien using pip command into my virtual environment in Python. At the moment, my virtual environment has python = 3.8, sapien, requests modules.

Problem downloading the PartNet dataset

I am facing a problem downloading the dataset from ShapeNet: I am not able to request the data, as the Google form is no longer accepting responses. Can you please look into this? I already emailed [email protected] but didn't get a response.

Thanks

Problem about downloading the dataset

Hi,

Thank you for your great work and sharing your code and data!

I was not able to download PartNet from ShapeNet. It seems that the website is down, so I was not able to register. I would like to ask if there is another way to download this dataset. Thanks.

Best,

Points belonging to chair/foot_base/foot missing

Hi!
I just noticed that for a particular object {'model_id': '225ef5d1a73d0e24febad4f49b26ec52', 'anno_id': '2458', 'ins_seg': [{'part_name': 'chair', 'leaf_id_list': [19, 17, 18, 15, 14, 13, 12, 7, 8, 9, 10]} taken from Chair-level-2-train_04, the label annotations do not contain a single point with label=8. Hence, there is no way to get the points belonging to this particular part.

np.unique(label[i])
array([ 7,  9, 10, 12, 13, 14, 15, 17, 18, 19], dtype=int32)

Here's the full tree:

[{'part_name': 'chair', 'leaf_id_list': [19, 17, 18, 15, 14, 13, 12, 7, 8, 9, 10]}, {'part_name': 'chair/chair_back', 'leaf_id_list': [19, 17, 18]}, {'part_name': 'chair/chair_back/back_surface', 'leaf_id_list': [19]}, {'part_name': 'chair/chair_back/back_surface/back_single_surface', 'leaf_id_list': [19]}, {'part_name': 'chair/chair_back/back_support', 'leaf_id_list': [17]}, {'part_name': 'chair/chair_arm', 'leaf_id_list': [15]}, {'part_name': 'chair/chair_arm/arm_sofa_style', 'leaf_id_list': [15]}, {'part_name': 'chair/chair_arm', 'leaf_id_list': [14]}, {'part_name': 'chair/chair_arm/arm_sofa_style', 'leaf_id_list': [14]}, {'part_name': 'chair/chair_seat', 'leaf_id_list': [13, 12]}, {'part_name': 'chair/chair_seat/seat_surface', 'leaf_id_list': [13]}, {'part_name': 'chair/chair_seat/seat_surface/seat_single_surface', 'leaf_id_list': [13]}, {'part_name': 'chair/chair_seat/seat_support', 'leaf_id_list': [12]}, {'part_name': 'chair/chair_base', 'leaf_id_list': [7, 8, 9, 10]}, {'part_name': 'chair/chair_base/foot_base', 'leaf_id_list': [7, 8, 9, 10]}, {'part_name': 'chair/chair_base/foot_base/foot', 'leaf_id_list': [7]}, {'part_name': 'chair/chair_base/foot_base/foot', 'leaf_id_list': [8]}, {'part_name': 'chair/chair_base/foot_base/foot', 'leaf_id_list': [9]}, {'part_name': 'chair/chair_base/foot_base/foot', 'leaf_id_list': [10]}]

Is there a way to get those missing points?
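The check reported in this issue can be reproduced with a small sketch (toy arrays stand in for the real `label-10000.txt` contents; the helper name is ours):

```python
import numpy as np

def missing_leaf_ids(leaf_id_list, point_labels):
    """Return leaf ids that no sampled point was assigned to."""
    present = set(np.unique(point_labels).tolist())
    return sorted(set(leaf_id_list) - present)

# Labels observed for this object (matches the np.unique output above)
labels = np.array([7, 9, 10, 12, 13, 14, 15, 17, 18, 19], dtype=np.int32)
missing = missing_leaf_ids([19, 17, 18, 15, 14, 13, 12, 7, 8, 9, 10], labels)
```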

How do I properly normalize PartNet point cloud to match the ShapeNet meshes?

I have attempted to use the normalize_pts() and normalizeOBJ methods to normalize the point clouds in the "sample-points-all-pts-label-10000.ply" files. Below are the results I obtained for the two methods, in order. I have read #19, #15, and #25 but failed to find the answer I am looking for. The closest match between the points and the mesh is in the second picture; however, there still seems to be a slight displacement between the points and the model. Is there a script available that matches the points almost exactly to the mesh?

I am certain that you are very busy, and I understand that maintaining a large-scale dataset is a lot of work. Therefore, I would appreciate any guidance I receive on the matter.
