
defrec_and_pcm's People

Contributors

idanachituve


defrec_and_pcm's Issues

About

I couldn't get the visualizations I wanted after the reconstruction, and the network doesn't seem to be converging. Could you please send me the weight parameters from after pre-training?

How to launch training

Hi, thank you for your work.
I am getting this error:

Traceback (most recent call last):
File "PointDA/trainer.py", line 12, in
import utils.log
ModuleNotFoundError: No module named 'utils'

when using this command:
python PointDA/trainer.py
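A common cause of this error is launching the script from inside the repo without the repo root on the module search path. A minimal workaround sketch (the paths are assumptions based on the traceback, which suggests `utils` sits at the repo root next to `PointDA/`) is to prepend the root to `sys.path` before the `import utils.log` line in `PointDA/trainer.py`:

```python
import os
import sys

# Assumed layout: <repo_root>/PointDA/trainer.py and <repo_root>/utils/.
# Climb two levels from this file to reach the repo root, then put it at
# the front of the module search path so `import utils.log` resolves even
# when the script is launched as `python PointDA/trainer.py`.
repo_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.insert(0, repo_root)
```

Alternatively, running from the repo root with `PYTHONPATH=. python PointDA/trainer.py` (or installing the package, as discussed in a later issue) has the same effect without editing the script.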

Experiment results in this paper

I noticed that the experiment results of "DefRec+PCM" are not consistent across Tables 1-4; Table 4 in particular looks better. Is this due to different experimental settings, or some other reason?

About defrec.py

Dear Idan,
I couldn't find the other types of region selection methods in defrec.py.
Can you tell me where I can find them?

About install the repo

Hi,
When I don't install the repo, I run into the problems described in #7, but everything works once I install it.
However, when I check setup.py, it doesn't add utils to the environment, which confuses me.

Would you please help me understand it?
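One plausible explanation (an assumption, not a statement about this repo's actual setup.py) is how setuptools discovers packages: `find_packages()` only picks up directories that contain an `__init__.py`, so a `utils/` folder without one is silently skipped at install time. The throwaway layout below demonstrates the behavior:

```python
import os
import tempfile

from setuptools import find_packages

# Build a scratch project tree with a bare `utils/` directory, then add
# an __init__.py and compare what find_packages() discovers each time.
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "utils"))
    without_init = find_packages(root)   # no __init__.py -> skipped
    open(os.path.join(root, "utils", "__init__.py"), "w").close()
    with_init = find_packages(root)      # now discovered

print(without_init)  # []
print(with_init)     # ['utils']
```

If that is what is happening here, either adding an `__init__.py` or switching to `setuptools.find_namespace_packages()` would make `utils` installable.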

About the compared method 'Reconstructing Space'

Hi, thanks for sharing this great work. Could you kindly share the implementation of 'Reconstructing Space' from your paper? I couldn't find official code for that paper.

Thank you so much!

BatchNorm PointSegDA

Hi! From the network implementation (PointSegDA experiments) I see that you pruned many BatchNorm layers that were in the original PyTorch DGCNN implementation, for example in the main feature extractor 'shared_layers'. Is there a specific reason for this? Did you find that removing BN layers makes the results more stable across domains?
Thanks a lot

Different implementation of DGCNN

Thanks for your code. It is a really interesting work.

I found that your implementation of DGCNN differs from the official one. It seems you use the top branch of DGCNN's segmentation network instead of the classification network.

The classification network may not contain the transform net, and its embedding feature is the concatenation of the max-pooling and average-pooling features.

Is that right?
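For reference, the global embedding the question describes can be sketched as follows. This is a generic NumPy illustration of the pooling scheme in DGCNN's classification head, not code from this repository; shapes and names are assumptions:

```python
import numpy as np

def global_embedding(point_feats: np.ndarray) -> np.ndarray:
    """Reduce per-point features (N, C) to a 2C-dim global embedding by
    concatenating a max-pool and an average-pool over the point axis."""
    max_pool = point_feats.max(axis=0)    # (C,)
    avg_pool = point_feats.mean(axis=0)   # (C,)
    return np.concatenate([max_pool, avg_pool])

feats = np.random.rand(1024, 512)  # e.g. 1024 points, 512-dim features
emb = global_embedding(feats)      # (1024,) = 2 * 512
```

A segmentation-style branch would instead broadcast the global feature back to every point, which matches the difference the question is pointing at.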

About experiment results that differ greatly from the paper with the default parameters

Dear Idan,

I ran the code for the PointSegDA task with the default parameters, and the results differ greatly from those in the paper. For example, for FAUST to MIT our mIoU is 0.6211 while the paper reports 79.7 ± 0.3, and for MIT to FAUST our mIoU is 0.4400 while the paper reports 67.1 ± 1.0. Could you please help explain what causes this?
Could you share the parameters used, such as DefRec_weight, noisy_std, and so on?

Different results on Table 1 & 2

What is the difference between Tables 1 and 2 in your paper? The results seem different, but I cannot find any differing experimental settings or descriptions. Thanks.

Poor results on outdoor scenes

I tried to apply this method to outdoor point cloud scenes and found that it uses too much memory and the mIoU did not improve much. Is this method unsuitable for large-scale outdoor scenes?

Results for the baseline (unsupervised) from modelNet to shapeNet

Hi,

Nice work and thanks for sharing the code!

I have one question regarding the baseline results from ModelNet to ShapeNet. It seems that the baseline already achieves 83.3%, whereas in PointDA the baseline is 42.5%. I also noticed that the training data are different -- alignment has been done in your dataset. Would that be the reason for the significant improvement in that baseline?

Thanks!

About T-SNE

Hi, thanks for sharing this great work. Could you kindly share the implementation of 'T-SNE' in your paper?
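While waiting for the authors' version, a generic t-SNE visualization of learned features can be sketched with scikit-learn. This is only an assumed workflow, not the paper's code; `feats` stands in for the per-sample embeddings extracted from the trained network:

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for network embeddings: 100 samples with 64-dim features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 64))

# Project to 2-D for plotting; perplexity must stay below the sample
# count, and a PCA init plus fixed seed makes runs reproducible.
emb2d = TSNE(n_components=2, perplexity=30, init="pca",
             random_state=0).fit_transform(feats)
print(emb2d.shape)  # (100, 2)
```

The 2-D points are then typically scattered with one color per class (or per domain, to inspect domain alignment).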
