talreiss / panda
PANDA: Adapting Pretrained Features for Anomaly Detection and Segmentation (CVPR 2021)
Home Page: https://arxiv.org/pdf/2010.05903.pdf
License: Other
Hi, I am experimenting with the outlier exposure code. It seems like the outlier exposure code does not use the kNN algorithm and is simply a binary classification. Is this supposed to be the case? Thank you!!! Great paper ;)
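For context, the kNN criterion the question refers to (used by PANDA without outlier exposure) scores a test sample by its distance to the nearest normal training features. A minimal sketch, assuming features have already been extracted, with the function name `knn_anomaly_score` being illustrative rather than the repo's actual API:

```python
import numpy as np

def knn_anomaly_score(train_feats: np.ndarray, test_feats: np.ndarray, k: int = 2) -> np.ndarray:
    """Score each test sample by its mean Euclidean distance to its
    k nearest normal (training) features; larger = more anomalous."""
    # Pairwise distances between every test and every train feature
    dists = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    # Mean distance to the k closest normal samples
    knn = np.sort(dists, axis=1)[:, :k]
    return knn.mean(axis=1)

# Toy usage: a point near the normal cluster scores lower than a far one
train = np.zeros((10, 4))
test = np.array([[0.1, 0.0, 0.0, 0.0], [5.0, 5.0, 5.0, 5.0]])
scores = knn_anomaly_score(train, test)
```

With outlier exposure, by contrast, a classifier head trained to separate normal data from the exposed outliers can supply the score directly, which is why no kNN step appears in that code path.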
Dear authors,
Thank you for your excellent work. I am slightly confused about the SPADE method and its results on MVTec.
In Table 4, you report a segmentation AUROC of 96.2 and a PRO of 91.7. However, in Table 8, you report a PRO value of 97.3, which seems impossible; 97.3 doesn't match any metric reported in the paper.
You compared the cat and ensemble methods for SPADE, and according to the previous arXiv version (https://arxiv.org/pdf/2005.02357.pdf) the ensemble method improved the segmentation AUROC by 0.2 points. However, you changed the kernel size of the Gaussian filter from 4 to 5, and I am not sure about the influence of this change.
Also, since the images were resized to 256 and then cropped to 224, why were the metrics still calculated at 256×256 resolution? Does this setting improve the segmentation performance?
I am looking forward to your reply and thank you in advance!
Hi, thanks for your excellent work! I have a question to confirm.
During training on some datasets, I observe that both loss values and AUROC increase. I am wondering whether this phenomenon is reasonable. Could you please help explain it?
Thanks a lot!
Thank you for uploading. The code is well written and readable and I was able to reproduce the CIFAR10 results.
I am more interested in the fine-grained datasets you report in Table 2 of your paper. I have created a fork and added the Caltech Birds dataset (diff). I use the first 20 classes as normal classes and the entire test set for evaluation, as described in Appendix C. However, I am not able to reproduce your results: I only get an initial AUROC of 76.9%, and it gets worse during training, both with and without --ewc. This is still higher than the compared methods but much lower than the reported 95.3%.
Could you please take a look and see whether I got something wrong? Alternatively, could you provide your code for the datasets in Table 2?
Thanks a lot
Hi,
Thank you for sharing the implemented code with the paper.
As I try to get familiar with this field, I have a few questions about training and testing on one-class CIFAR-10. In Table 2 of Section 4.3, you note that self-supervised methods didn't outperform the pretrained ones, so using pretrained features is one of the cores of your work. In that case, is it legitimate to train a model on regular CIFAR-10 (i.e., standard CIFAR-10 image classification) and then test it on one-class CIFAR-10 (with the criterion and average ROC AUC following your code)?
Also, in your view, would that procedure count as self-supervised or pretrained?
Finally, correct me if I am wrong, but I notice that the implemented code only evaluates one specific class at a time. Therefore, to get the same average ROC AUC as Table 2, I have to average the ROC AUC over every class, right? For example, class 1: 90%, class 2: 91%, class 3: 90%, class 4: 91%, so the average ROC AUC is 90.5% by (90% + 91% + 90% + 91%)/4.
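The averaging described above can be confirmed with a tiny sketch (the per-class values here are the illustrative numbers from the question, not results from the paper):

```python
# Hypothetical per-class ROC AUC values (illustrative only)
per_class_auc = {1: 0.90, 2: 0.91, 3: 0.90, 4: 0.91}

# Average ROC AUC over the one-class runs, one run per normal class
mean_auc = sum(per_class_auc.values()) / len(per_class_auc)
print(round(mean_auc, 4))  # 0.905
```

In practice each run would fix one class as normal, score the full test set, compute that run's ROC AUC, and only then average across classes.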
Many Thanks
Is the mAP value on Papers with Code really 95.9%?
Thank you for a job well done!
Only the Fisher matrix for ResNet152 is available in the codebase. Could you please provide the Fisher matrices for ResNet18 and ResNet34? Calculating them myself requires a lot of memory.
Hi, do you have the detailed per-class anomaly detection performance of the PANDA method on the CIFAR-10 dataset?
Hello :) I'm very interested in your work, writing from Korea.
In your repo, the Fisher matrix you share is limited to ResNet only, and I want to try PANDA (PANDA-EWC) with EfficientNet and other recent models.
Your paper does not specify how the shared .pth file was created.
Could you tell me how to create it, or point me to relevant keywords?
Thanks😊
Hi, thanks for your great work. I notice the paper says: "A weakness of the simple early-stopping approach is the reliance on a hyper-parameter that may not generalize to new datasets. Although the optimal stopping epoch can be determined with a validation set containing anomalies, it is not available in our setting." I don't understand why simple early stopping with a validation set is not available in the one-class setting. The validation set could consist of a held-out portion of the normal images; when the change in validation loss stays small over several epochs, training stops. Also, I don't see any early stopping in the code.
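The early-stopping scheme proposed in this question (not the paper's method) can be sketched as a patience rule on a normal-only validation loss; the function name and thresholds below are illustrative assumptions:

```python
def should_stop(val_losses: list, patience: int = 3, min_delta: float = 1e-4) -> bool:
    """Stop when the normal-only validation loss has not improved by at
    least min_delta over the last `patience` epochs."""
    if len(val_losses) <= patience:
        return False  # not enough history yet
    best_before = min(val_losses[:-patience])   # best loss before the window
    recent_best = min(val_losses[-patience:])   # best loss inside the window
    return recent_best > best_before - min_delta

# Plateaued losses trigger a stop; steadily improving losses do not
print(should_stop([1.0, 0.9, 0.8, 0.8, 0.8, 0.8]))  # True
print(should_stop([1.0, 0.9, 0.8, 0.7, 0.6, 0.5]))  # False
```

Whether a plateau in the normal-only loss actually correlates with the best anomaly-detection AUROC is exactly the open question being raised here.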
Hi, two more questions. Why is the center (torch.FloatTensor(feature_space).mean(dim=0)) of the criterion not updated during training? And why does the training loss increase no matter which dataset I use for training?
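For background on the first question: in DeepSVDD-style compactness losses, the center is typically computed once from the initial feature space and then frozen, because recomputing it every step would let the objective be trivially minimized by the center drifting along with collapsing features. A minimal numpy sketch under that assumption (not the repo's actual code):

```python
import numpy as np

def compactness_loss(features: np.ndarray, center: np.ndarray) -> float:
    """Mean squared Euclidean distance of features to a FIXED center."""
    return float(((features - center) ** 2).sum(axis=1).mean())

# Center computed once from the initial (pretrained) features, then frozen.
rng = np.random.default_rng(0)
init_feats = rng.normal(size=(100, 16))
center = init_feats.mean(axis=0)

loss = compactness_loss(init_feats, center)
```

Note the loss is zero only when every feature sits exactly on the center, which is why features drifting away from the frozen center during adaptation can make the training loss rise even while the features become more useful for kNN scoring.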
Hi, do you have the image-level anomaly detection performance of PANDA on the MVTec dataset?