Comments (8)
from ssen.
Thank you for your quick reply!
- I rechecked the code, and I think the bug I found only occurs when initializing SemanticSegModel. Currently, SemanticSegModel also skips the last layer when loading weights, which is not intended; overriding init_pretrained() in SemanticSegModel would fix this issue.
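A minimal sketch of the suggested fix, assuming a base class whose init_pretrained() skips the final layer (the behavior the instance model wants) and an override in SemanticSegModel that loads the full checkpoint. Apart from SemanticSegModel and init_pretrained(), all layer and class names here are hypothetical stand-ins, not the repo's actual MinkowskiNet code:

```python
import torch
import torch.nn as nn


class BaseModel(nn.Module):
    """Stand-in backbone; the real models use MinkowskiNet layers."""

    def __init__(self, num_classes=20):
        super().__init__()
        self.conv = nn.Linear(3, 16)             # hypothetical backbone layer
        self.final = nn.Linear(16, num_classes)  # task-specific last layer

    def init_pretrained(self, ckpt_path):
        # Base behavior: drop the final-layer weights, then load the rest
        # with strict=False (what the instance model wants).
        state = torch.load(ckpt_path, map_location="cpu")
        state = {k: v for k, v in state.items() if not k.startswith("final.")}
        self.load_state_dict(state, strict=False)


class SemanticSegModel(BaseModel):
    def init_pretrained(self, ckpt_path):
        # Override: the semantic model should load *all* weights,
        # including the final classification layer.
        self.load_state_dict(torch.load(ckpt_path, map_location="cpu"))
```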
A few more questions:
- How do you determine the semantic class of a predicted instance cluster? Is it the class that occurs most often in the cluster?
- You exclude voxels with instance label = 0 when computing the loss. Could you explain why this exclusion is needed?
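For the first question above, one common choice (a sketch of the usual heuristic, not necessarily what SSEN actually does) is a per-cluster majority vote over the predicted semantic labels:

```python
import numpy as np


def cluster_semantic_class(semantic_pred, instance_ids, cluster_id):
    """Assign a cluster the semantic class that occurs most often
    among its voxels (majority vote)."""
    mask = instance_ids == cluster_id
    labels, counts = np.unique(semantic_pred[mask], return_counts=True)
    return int(labels[np.argmax(counts)])
```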
Thanks for your attention!
1. Training an instance segmentation model
Training an instance segmentation model from scratch is not a very good idea. Using a pretrained semantic segmentation model trained on ScanNet achieved far better performance than training from scratch. Since the instance segmentation model has the same network architecture as the semantic segmentation model (except for the final layer), I think torch.load_state_dict should work as expected. strict=False is set so that the final layer is not taken into account. Could you check whether torch.load_state_dict(…, strict=False) changes all the parameters except for the last layer? If it does, then I think I updated the correct code for training the instance segmentation model; if not, you're welcome to contact me and I'll figure out what the bug is :)
2. Regarding evaluation code
I currently have no plans to release the mAP code for the validation split. It is really messy, and at the moment I do not have time to clean it up. I recommend using the original ScanNet repo as a reference.
Sincerely, Dongsu Zhang
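The check suggested above can be sketched as a small helper that compares the model's state dict before and after loading with strict=False; the helper name is hypothetical, and mismatched key names stand in for whatever the repo's final layer is called:

```python
import torch
import torch.nn as nn


def check_loaded_params(model, ckpt_path):
    """Load a checkpoint with strict=False and report which parameters
    actually changed, to verify that everything except the last layer
    was loaded from the checkpoint."""
    before = {k: v.clone() for k, v in model.state_dict().items()}
    state = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(state, strict=False)
    changed = [k for k, v in model.state_dict().items()
               if not torch.equal(v, before[k])]
    unchanged = [k for k in before if k not in set(changed)]
    return changed, unchanged
```

Note that strict=False only tolerates missing or unexpected keys; a final layer with the same name but a different shape would still raise an error, so the checkpoint's last-layer keys must be absent or renamed.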
…
On Dec 14, 2020, at 9:18 PM, Dongwon Kim @.***> wrote: First of all, thank you for sharing the nice code! In the paper, it is specified that the semantic model requires a pretrained MinkowskiNet. However, for the instance segmentation model, I'm not so clear on whether I have to use pretrained weights or not. Is training an instance segmentation model from scratch enough to reproduce the results? Which setting did you use in your paper? In the code, it seems that SSEN loads pretrained MinkowskiNet weights at the start of training. However, I found that loading weight files using the init() code of the SemanticSegModel or InstantSegModel class does not work properly (maybe strict=False could be the reason, since preprocessing_semnatic.py works well). Do you have any plans to release the evaluation code (which computes mAP) for the validation split?
Hey @96lives, I have two questions now:
- According to your comment, do you mean that we should also load a model pre-trained on semantic segmentation for the instance segmentation network (not only for the semantic segmentation network)?
- Why not just use the ground truth semantic labels to exclude the wall and floor?
Hope to get your reply!
- You could train from scratch, but we found that using semantic segmentation pretraining gave us far better results. Specifically, we used the pretrained weights from Choy et al.
- I believe your question is why we do not exclude the wall and floor using the ground truth semantic labels during training, not testing (using ground truth labels at test time would obviously be cheating).
During test time, SSEN observes semantic labels predicted by another semantic segmentation network, so to keep what SSEN observes during training consistent with test time, we trained our network on the semantic network's predicted labels.
Hope that answers the questions you asked!
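The train/test consistency described above can be sketched as a masking step. The class ids and function name here are hypothetical, not the repo's actual code:

```python
import numpy as np

# Hypothetical ScanNet-style semantic ids for the excluded classes.
WALL_ID, FLOOR_ID = 1, 2


def foreground_mask(sem_labels):
    """Keep only voxels whose semantic label is neither wall nor floor.

    During training, `sem_labels` should be the *predictions* of the
    semantic network, not the ground truth, so that the instance branch
    sees the same kind of (imperfect) labels it will see at test time.
    """
    return (sem_labels != WALL_ID) & (sem_labels != FLOOR_ID)
```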
Thank you for your reply!
Could I assume that I should load exactly the same model pre-trained on semantic segmentation for both the instance and semantic networks?
If so, could I assume that loading a semantic model would also help the instance network learn better? And should we freeze some layers after loading the model?
Hope to get your reply!
- I should load exactly the same model pre-trained on semantic segmentation for both instance and semantic networks?
Yes! Please see the Google Drive.
- If so, could I assume that loading a semantic model would also help the instance network learn better? And should we freeze some layers after loading the model?
Yes, we saw that the pretrained model performed better than ones without pretraining. And no, we did not freeze any layers; we did not try freezing layers, so we do not know how that would behave. We think the semantic segmentation pretrained model provides a good initialization for the instance segmentation model.
Many thanks!
I get it now. Good luck in future research!
Best,
Zhengdi