
iae's People

Contributors

simingyan


iae's Issues

How to pretrain using PointNext backbone as encoder?

Hi,
I am curious how to pretrain with other backbones such as PointNext. As shown in your paper, you experimented with PointNext and reported promising results. Could you release the PointNext pretraining code?

Thanks for your help.

Concerning the effectiveness of the pretrained model

Hi,
I tried to verify the performance of the pretrained model in your DGCNN codebase. The results I got are listed below:
[Screenshot: evaluation results, 2022-12-20]

Here, whether or not the pretrained model is loaded, I get almost the same result. Is there anything I am doing wrong?
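
One thing worth ruling out is a silently partial checkpoint load. A minimal, generic PyTorch sanity check (not specific to this repo; the function name and the commented path are illustrative) is to load the weights non-strictly and inspect the keys that failed to match:

    import torch

    def check_checkpoint(model, path):
        # Load `path` non-strictly and report any keys that failed to
        # match; an empty report means the weights were really loaded.
        # Note: some checkpoints nest the weights under a key such as
        # 'model_state_dict'; unwrap first if so.
        state = torch.load(path, map_location="cpu")
        result = model.load_state_dict(state, strict=False)
        print("missing keys:", result.missing_keys)
        print("unexpected keys:", result.unexpected_keys)

    # check_checkpoint(dgcnn_model, "pretrained_models/modelnet40.pt")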

Thanks for your help.

Evaluation code.

Hi,
Thanks for your great work! I am confused by the code below in eval_step: where do the normalized points and points_iou come from?

        # add pre-computed index
        inputs = add_key(inputs, data.get('inputs.ind'), 'points', 'index', device=device)
        # add pre-computed normalized coordinates
        points = add_key(points, data.get('points.normalized'), 'p', 'p_n', device=device)
        points_iou = add_key(points_iou, data.get('points_iou.normalized'), 'p', 'p_n', device=device)
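
For context, the `*.normalized` entries are pre-computed fields supplied by the dataset, and `add_key` just bundles them into a dict next to the raw tensor. Here is a sketch of the helper as it appears in the Convolutional Occupancy Networks codebase, which this evaluation code apparently follows (paraphrased, so treat it as an approximation):

    def add_key(base, new, base_name, new_name, device=None):
        # Bundle the pre-computed info `new` (a dict of tensors) with the
        # base tensor under named keys so downstream code can read both,
        # e.g. {'p': points, 'p_n': normalized_points}.
        if (new is not None) and isinstance(new, dict):
            if device is not None:
                for key in new:
                    new[key] = new[key].to(device)
            base = {base_name: base, new_name: new}
        return base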

Thanks again!

Linear evaluation on ModelNet40

Hi, first of all thanks for sharing your code.

I'm trying to reproduce the linear evaluation on ModelNet40.

I downloaded the dataset and ran the evaluation:

python train_svm.py --encoder=dgcnn_cls --restore_path=./pretrained_models/modelnet40_svm.pt

And got the following result:

Transfer linear SVM accuracy: 92.06%

How can I achieve the declared 94.2%?
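
For context, linear SVM evaluation usually follows a standard protocol: freeze the pretrained encoder, extract a global feature per shape, and fit a linear SVM on those features. A minimal sketch (not the repo's exact train_svm.py; the function names and the C value are illustrative):

    import torch
    from sklearn.svm import LinearSVC

    @torch.no_grad()
    def extract_features(encoder, loader, device="cuda"):
        encoder.eval()
        feats, labels = [], []
        for points, label in loader:                        # points: (B, N, 3)
            feats.append(encoder(points.to(device)).cpu())  # feature: (B, D)
            labels.append(label)
        return torch.cat(feats).numpy(), torch.cat(labels).numpy()

    # train_f, train_y = extract_features(encoder, train_loader)
    # test_f, test_y = extract_features(encoder, test_loader)
    # svm = LinearSVC(C=0.01).fit(train_f, train_y)
    # print("Transfer linear SVM accuracy:", svm.score(test_f, test_y))

Small differences in feature pooling, the number of input points, or the SVM's C are plausible sources of gaps like this.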

Pretrained models

Hello!
Do you have any pretrained models I could look at?

Trained ShapeNet decoder network for the pretraining phase

Hello, thank you for your great work!

I am wondering if you could also release the decoder model that you trained on the ShapeNet dataset. I could find an encoder model $f$ for the pre-training stage, but not a decoder model $g$.

By the way, I noticed that you released a pre-trained model with a decoder under downstream/segmentation, which I failed to load with the dgcnn repo. Could you double-check?

Thank you!

Publish Pretrained Model

Hi,

I would like to thank the authors for this excellent work. Could you please publish the weights of the pre-trained model?

Thank you!

Edit: I found it! Link

Concerning reproducing the results of Table 3

I loaded the pretrained model and fine-tuned VoteNet using your codebase. However, the performance in Table 3 cannot be reproduced.
The fine-tuning script I use is:

python train.py \
--dataset scannet --log_dir log_scannet \
--num_point 40000 --no_height \
--pre_checkpoint_path=~/pretrained_models/scannet.pt \
--batch_size=16

The training-from-scratch script I use is:

python train.py --dataset scannet --log_dir log_scannet0 --num_point 40000 --no_height --batch_size=16 

and the results I got are:


                            mAP@0.25   mAP@0.5
load the pretrained model     58.79      35.26
train from scratch            55.61      32.92

Is anything wrong?

I appreciate your help.

Unsupervised learning on point clouds

I am trying to train an autoencoder on point clouds and use the features to classify them into different categories. In this case, I am confused about which encoder would be suitable. I would be grateful if you could point out which part of the code I should focus on.
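
At a high level, the pipeline in this line of work looks like the sketch below (the module interface and shapes are assumptions for illustration, not the repo's exact API): the encoder maps a point cloud to a latent feature, the decoder maps (query point, latent) pairs to implicit values, and downstream classification uses only the latent feature.

    import torch.nn as nn

    class ImplicitAutoencoder(nn.Module):
        def __init__(self, encoder, decoder):
            super().__init__()
            self.encoder = encoder   # point cloud (B, N, 3) -> latent (B, D)
            self.decoder = decoder   # (queries (B, Q, 3), latent) -> values

        def forward(self, points, queries):
            z = self.encoder(points)
            return self.decoder(queries, z)  # one implicit value per query

    # After pretraining, discard the decoder and feed the encoder output z
    # into a linear classifier or SVM for category prediction.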

Concerning the results in Table 4

Hi, thanks for your great work.
I am trying to reproduce the results in Table 4 with the codebase and pretrained model. The fine-tuning script I used is:

python main_semseg.py \
--exp_name=semseg_66 --test_area=6 \
--batch_size=16 \
--restore_path=pretrained_models/dgcnn_semseg.pt

and the evaluation script:


python main_semseg.py \
--exp_name=semseg_eval --test_area=all \
--eval=True --model_root=outputs/semseg_66/models/model_6.t7 

However, the results I got are higher than the ones listed in Table 4:

                                  OA     mIoU
finetune with dgcnn_semseg.pt    95.25    86.1


Is there anything wrong? Why is it so high?

Thanks for your help.

Point-M2AE Experiments?

Dear authors,

Thank you very much for your amazing contribution!

I am considering using your framework for an ongoing project, and I would be interested in using Point-M2AE in your setting. However, I could not find any config for the experiments reported in the paper with this model. Would you mind clarifying how I could run those experiments?

Thank you in advance!

David Romero

Reproduce classification fine-tuned results on ModelNet40

Hi, thanks for your great work. I tried to fine-tune DGCNN with the given pretrained model (modelnet40_clsft.pt) on ModelNet40. However, even when I follow the suggestions in #8, setting batch_size=32, k=40, num_points=2048, epoch=250, the OA still cannot reach 94.2; it reaches 93.4. By further comparing your released log with mine, I found that the configurations of data_aug and scheduler differ. I set data_aug=1 and scheduler=cos following your log and ran the experiment twice more, which gave OA = 93.6 and 93.5 respectively. The results still differ from the paper-reported 94.2 OA.

Meanwhile, I tested DGCNN with the pretrained weights (modelnet40_trained.pt), but it gives 94.0 OA and 91.1 mean class accuracy, which differs from the 94.2 OA and 91.6 mean class accuracy in your released log.

Could you please give me some advice on reproducing the classification results? Thanks for your help!

Release date

Hi,

Thanks for your great work.
Would you mind telling us the expected code release date?

Use own dataset

I am trying to use my own dataset, which is saved in PCD format. How can I use it?
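
The codebase's loaders presumably expect arrays of xyz coordinates, so the usual route is to read the .pcd file into a numpy array first. A minimal sketch with Open3D (the file path is illustrative, and the normalization assumes the model expects unit-scale inputs):

    import numpy as np
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("my_scan.pcd")
    points = np.asarray(pcd.points, dtype=np.float32)   # (N, 3)

    # Center and scale to the unit sphere, as most pretrained point-cloud
    # models assume normalized inputs.
    points -= points.mean(axis=0)
    points /= np.max(np.linalg.norm(points, axis=1))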

Reproduce the results on ModelNet40 dataset

Hi, thanks for your contribution. I am trying to reproduce your fine-tuning results on ModelNet40. I followed your suggestions to prepare the ModelNet40 dataset, downloaded your pre-trained models, and used your suggested command for fine-tuning. I conducted the experiment twice; one run gives 93.1 and the other 93.0, which differs from your reported 94.2.

I carefully compared your released log (with 94.2 ACC) and my log, and found some differences:

  1. In your log, the model is fine-tuned for 250 epochs, while in the released code the number of epochs is set to 200.
  2. In your log, the batch size is set to 32, while in the released code it is set to 24.
  3. In your log, num_point is set to 2048 and k is set to 40, while in the released code num_point is set to 1024 and k is set to 20.

I am wondering which of these differences plays the most important role in the ModelNet40 evaluation. Could you give me some suggestions? Thanks for your time.
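
For anyone else attempting this, a fine-tuning command matching the log's settings might look like the following. The script name and exact flag spellings are assumptions pieced together from the flags mentioned above; check them against the repo before running:

python main_cls.py \
--exp_name=modelnet40_ft \
--restore_path=pretrained_models/modelnet40_clsft.pt \
--batch_size=32 --epochs=250 --num_points=2048 --k=40 \
--data_aug=1 --scheduler=cos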

Completion results

Hi, thanks for your great work!

I don't quite understand how you obtain the completion results on ScanNet (Figure 4 of your article). As far as I understand, the model outputs implicit function values for ambient points (that is, points uniformly sampled in the unit cube), so the output is simply one scalar value in [-1, 1] for each ambient point. But it seems you obtain surfaces. Is that correct? And if so, how is that done?
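
For reference, the usual way to turn implicit values into a surface (e.g. in Occupancy Networks, which this setup resembles) is to evaluate the decoder on a dense 3D grid and run marching cubes at the decision level set. A minimal sketch, where `decoder` and the latent `z` are illustrative placeholders:

    import numpy as np
    from skimage import measure

    res = 64
    lin = np.linspace(-0.5, 0.5, res)
    grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), axis=-1)

    # values = decoder(grid.reshape(-1, 3), z).reshape(res, res, res)
    values = np.random.randn(res, res, res)  # placeholder for decoder output

    # Extract the zero level set as a triangle mesh.
    verts, faces, normals, _ = measure.marching_cubes(values, level=0.0)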

Which open3d version?

Hi,
Thanks for this work!
I tried setting up this project and got an error about the missing open3d package. When I tried installing it, there was a conflict with pyyaml.
Which version of open3d did you use?
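
A generic way to check which Open3D version an environment resolves to (not repo-specific):

python -c "import open3d; print(open3d.__version__)"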
