simingyan / iae
[ICCV 2023] "Implicit Autoencoder for Point-Cloud Self-Supervised Representation Learning"
Hi,
I am curious how to pretrain with other backbones such as PointNext. As shown in your paper, you experiment with PointNext and report promising results. Could you release the PointNext pretraining code?
Thanks for your help.
Hi,
I tried to verify the performance of the pretrained DGCNN model in your codebase. The results I get are listed as follows:
Whether or not the pretrained model is loaded, I get almost the same result. Is there anything I am doing wrong?
Thanks for your help.
Hi,
Thanks for your great work! I am confused by the code below in eval_step: where do the normalized points and points_iou come from?
# add pre-computed index
inputs = add_key(inputs, data.get('inputs.ind'), 'points', 'index', device=device)
# add pre-computed normalized coordinates
points = add_key(points, data.get('points.normalized'), 'p', 'p_n', device=device)
points_iou = add_key(points_iou, data.get('points_iou.normalized'), 'p', 'p_n', device=device)
Thanks again!
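For context, `add_key` follows the pattern used in the Convolutional Occupancy Networks codebase: if a pre-computed entry (such as normalized coordinates) exists in the data dict, it wraps the base tensor in a dict exposing both keys; otherwise the base tensor passes through unchanged. A simplified sketch of that assumed behavior (device transfer omitted):

```python
def add_key(base, new, base_name, new_name, device=None):
    """If a pre-computed tensor `new` is present, return a dict exposing
    both the raw data and the extra entry; otherwise return `base`
    unchanged (the real implementation also moves `new` to `device`)."""
    if new is not None:
        base = {base_name: base, new_name: new}
    return base

# data.get('points.normalized') is None for datasets without
# pre-computed normalized coordinates, so `points` stays a plain tensor.
```

So `points` only becomes `{'p': ..., 'p_n': ...}` when the data loader actually provides a `points.normalized` entry.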
Hi, first of all thanks for sharing your code.
I'm trying to reproduce the linear evaluation on ModelNet40.
I downloaded the dataset and ran the evaluation:
python train_svm.py --encoder=dgcnn_cls --restore_path=./pretrained_models/modelnet40_svm.pt
and got the following result:
Transfer linear SVM accuracy: 92.06%
How can I achieve the reported 94.2%?
Hello!
Do you have any pretrained models I could look at?
Hello, thank you for your great work!
I am wondering if you could also release a decoder model trained on the ShapeNet dataset; I could only find an encoder model.
By the way, I noticed that you released a pre-trained model with a decoder under downstream/segmentation, which I failed to load with the dgcnn repo. Could you double-check?
Thank you!
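In case it helps while the authors check: a common workaround when a checkpoint carries extra (e.g. decoder) weights is to filter the state dict down to the target model's keys and load with `strict=False`. A hypothetical sketch (the key names are illustrative, not taken from this repo):

```python
def filter_state_dict(ckpt, model_keys):
    """Keep only the checkpoint entries the target model actually has,
    e.g. dropping 'decoder.*' weights for an encoder-only model."""
    return {k: v for k, v in ckpt.items() if k in model_keys}

# With PyTorch this would be used roughly as:
#   state = torch.load('dgcnn_semseg.pt', map_location='cpu')
#   model.load_state_dict(
#       filter_state_dict(state, model.state_dict()), strict=False)
```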
Hi,
I would like to thank the authors for this excellent work. Could you please publish the weights of the pre-trained model?
Thank you!
Edit: I found it! Link
I loaded the pretrained model and fine-tuned VoteNet using your codebase. However, the performance in Table 3 cannot be reproduced.
The fine-tuning script I use is:
python train.py \
--dataset scannet --log_dir log_scannet \
--num_point 40000 --no_height \
--pre_checkpoint_path=~/pretrained_models/scannet.pt \
--batch_size=16
The from-scratch training script I use is:
python train.py --dataset scannet --log_dir log_scannet0 --num_point 40000 --no_height --batch_size=16
and the results I got are:

|  | mAP@0.25 | mAP@0.5 |
| --- | --- | --- |
| load the pretrained model | 58.79 | 35.26 |
| train from scratch | 55.61 | 32.92 |

Is anything wrong?
Thanks for your help.
I am trying to train an autoencoder on point clouds and use the features to classify them into different categories. In this case I am confused about which part would be suitable as the encoder. I would be grateful if you could point out which part of the code I should focus on.
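For the general recipe, a minimal sketch (all names here are illustrative, not from this repo): extract a global feature per cloud from a frozen encoder, then fit a linear SVM on those features, as in the repo's linear-evaluation setup. A toy max-pool "encoder" stands in for the pretrained network:

```python
import numpy as np
from sklearn.svm import LinearSVC

def extract_features(encoder, clouds):
    # encoder: callable mapping an (N, 3) cloud to a 1-D feature vector
    return np.stack([encoder(pc) for pc in clouds])

# toy stand-in for a pretrained encoder: global max-pool over points
toy_encoder = lambda pc: pc.max(axis=0)

rng = np.random.default_rng(0)
# two well-separated synthetic "categories", 20 clouds each
clouds = [rng.normal(loc=c, size=(128, 3)) for c in (0.0, 5.0) for _ in range(20)]
labels = [0] * 20 + [1] * 20

feats = extract_features(toy_encoder, clouds)
clf = LinearSVC().fit(feats, labels)
print(clf.score(feats, labels))
```

With the real codebase, the frozen pretrained encoder plays the role of `toy_encoder`, which is what `train_svm.py` does for linear evaluation.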
Hi, thanks for such great work.
I am trying to reproduce the results in Table 4 with the codebase and pretrained model. The fine-tuning script I used is:
python main_semseg.py \
--exp_name=semseg_66 --test_area=6 \
--batch_size=16 \
--restore_path=pretrained_models/dgcnn_semseg.pt \
and the evaluation script:
python main_semseg.py \
--exp_name=semseg_eval --test_area=all \
--eval=True --model_root=outputs/semseg_66/models/model_6.t7
However, the results I got are higher than the ones listed in Table 4:

|  | OA | mIoU |
| --- | --- | --- |
| finetune with dgcnn_semseg.pt | 95.25 | 86.1 |

Is there anything wrong? Why are they so high?
Thanks for your help.
Dear authors,
Thank you very much for your amazing contribution!
I am considering using your framework for an ongoing project, and I would be interested in using Point-M2AE in your setting. However, I could not find any config for the experiments reported in the paper with this model. Would you mind clarifying how I could run those experiments?
Thank you in advance!
David Romero
Hi, thanks for your great work. I tried to fine-tune DGCNN with the given pretrained model (modelnet40_clsft.pt) on ModelNet40. However, even when I follow the suggestions in #8, setting batch_size=32, k=40, num_points=2048, epoch=250, the OA still cannot reach 94.2; I get 93.4. By further comparing your released log with mine, I found that the configurations of data_aug and scheduler differed. I set data_aug=1 and scheduler=cos following your log and ran the experiment twice more, which gave OA = 93.6 and 93.5 respectively. The results still differ from the paper-reported 94.2 OA.
Meanwhile, I tested DGCNN with the pretrained weights (modelnet40_trained.pt), which gives 94.0 OA and 91.1 mean class accuracy, different from the 94.2 OA and 91.6 mean class accuracy in your released log.
Could you please give me some advice on reproducing the classification results? Thanks for your help!
Hi,
Thanks for your great work.
Would you mind telling us the expected code release date?
I am trying to use my own dataset, which is saved in .pcd format. How can I use it?
Hi, thanks for your contribution. I am trying to reproduce your fine-tuning results on ModelNet40. I followed your suggestions to prepare the ModelNet40 dataset, downloaded your pre-trained models, and used your suggested command for fine-tuning. I conducted the experiment twice; the results were 93.1 and 93.0, which differ from your reported 94.2.
I carefully compared your released log (with 94.2 ACC) against mine and found some differences.
I am wondering which difference plays such an important role in the ModelNet40 evaluation. Could you give me some suggestions? Thanks for your time.
Great work! Could you provide the pretrained implicit autoencoder models (trained on the ShapeNet and ScanNet datasets) and the code that generates meshes from the implicit output?
Hi, thanks for your great work!
I don't quite understand how you obtain the completion results on ScanNet (Figure 4 of your paper). As far as I understand, the model outputs implicit function values for ambient points (i.e., points uniformly sampled in the unit cube), so the output is simply one scalar value in [-1, 1] per ambient point. But it seems you obtain surfaces. Is that correct? If so, how is that done?
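In case it helps: a common way to get surfaces from such scalar implicit values is to evaluate the function on a dense grid and run marching cubes on the zero (or 0.5-occupancy) level set. A toy sketch, with a sphere standing in for the trained decoder; the actual mesh extraction (e.g. `skimage.measure.marching_cubes`) is left as a comment:

```python
import numpy as np

def implicit_sphere(p, r=0.3):
    """Toy stand-in for the decoder: signed distance to a sphere."""
    return np.linalg.norm(p, axis=-1) - r

# evaluate the implicit function on a dense grid over the unit cube
res = 32
ax = np.linspace(-0.5, 0.5, res)
grid = np.stack(np.meshgrid(ax, ax, ax, indexing='ij'), axis=-1)
values = implicit_sphere(grid.reshape(-1, 3)).reshape(res, res, res)

inside = values < 0  # occupancy: grid cells below the zero level set
# verts, faces, _, _ = skimage.measure.marching_cubes(values, level=0.0)
```

The surface is then wherever `values` crosses zero; marching cubes turns that level set into a triangle mesh.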
Hi,
thanks for this work!
I tried setting up this project and got an error about the missing open3d package. When I tried installing it, there was a version conflict with pyyaml.
Which version of open3d did you use?