mala-lab / inctrl

Official implementation of CVPR'24 paper 'Toward Generalist Anomaly Detection via In-context Residual Learning with Few-shot Sample Prompts'.

License: Apache License 2.0

Python 100.00%
anomaly-detection few-shot-anomaly-detection foundation-models generalist-model image-anomaly-detection vision-language-model

inctrl's People

Contributors: diana1026, guansongpang

inctrl's Issues

License

Hi InCTRL team, @Diana1026 and @GuansongPang,

I read the paper with great interest and would like to investigate your model and possibly put it to use. However, there is no license for the code in this repository.

Do you intend to specify a license? And if so, when would you do it?

Thank you in advance for your work and best regards!

How is the x.pt file passed to python test.py --few_shot_dir generated?

Thank you for doing a great job. I have a question: I want to use my own few-shot normal samples to verify defect detection, and the --few_shot_dir parameter of python test.py requires a file in .pt format. How do I convert my normal samples into this .pt file? The relevant loading code is:

few_shot_path = os.path.join(cfg.few_shot_dir, cfg.category + ".pt")  # e.g. <few_shot_dir>/candle.pt
normal_list = torch.load(few_shot_path)  # the saved few-shot normal samples

Please give me some help. Thanks.
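
Judging from the loading code above, the .pt file appears to be a saved list of preprocessed normal images that torch.load returns directly. A minimal sketch of how one might build such a file; the folder path, the 240x240 size, and the transform are assumptions for illustration, not taken from the repo, so match whatever preprocessing test.py applies to query images:

import os
import torch
from PIL import Image
from torchvision import transforms

# Assumed preprocessing -- align this with what test.py applies to queries.
transform = transforms.Compose([
    transforms.Resize((240, 240)),
    transforms.ToTensor(),
])

normal_dir = "my_dataset/train/good"  # hypothetical folder of normal images
normal_list = [
    transform(Image.open(os.path.join(normal_dir, name)).convert("RGB"))
    for name in sorted(os.listdir(normal_dir))[:8]  # e.g. an 8-shot sample
]
torch.save(normal_list, "my_category.pt")  # torch.load() then returns this list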

INT8

Hello, I want to speed up the model. How can I modify it to quantize it to INT8?
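
Not an official answer, but PyTorch's post-training dynamic quantization is a common first step; a minimal sketch on a toy module (whether it preserves InCTRL's accuracy is untested, and the toy model below is a stand-in for the real model loaded from checkpoint.pyth):

import torch
import torch.nn as nn

# Toy stand-in -- replace with the InCTRL model built and loaded as in test.py.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 1))

# Dynamic quantization stores Linear weights as int8 and dequantizes them
# on the fly at inference time (CPU-only in stock PyTorch).
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized)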

json generation fails

Dear InCTRL-Team @GuansongPang @Diana1026,

I wanted to set up your project on my own machine but bumped into an error.
When using your command in the terminal,
python gen_val_json.py --dataset_root=./visa_anomaly_detection/visa --dataset_name=candle
it says that the argument is not found. A quick look into the code shows there is no dataset_name argument defined there.
What is the fix for my problem?

Best regards

the step 2 google drive

Hello, Step 2 of the README says to download the few-shot normal samples for inference from [Google Drive]. Where can I get the link? Thanks.

Few-shot Normal Images for Inference.

@Diana1026
Many thanks for your awesome work. I am currently following the anomaly detection protocol you defined.

However, per the public link on your GitHub page, it seems you only released .pt files for the few-shot normal samples, and these do not carry information (e.g., image names) showing which specific samples from each dataset were used to create them. In addition, the few-shot normal sample folder does not appear to contain .pt files for all 9 datasets reported in your work. I would greatly appreciate it if you could share the names of the few-shot samples used in the inference.

I was trying to contact you via email, but my Outlook email app says [email protected] is not a valid email address.
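
In the meantime, a quick inspection at least shows what a released .pt file contains (counts and tensor shapes, though apparently not image names); the path below is an example, and the list-of-tensors assumption follows the loading code quoted in the issue above:

import torch

normal_list = torch.load("fs_samples/visa/2/candle.pt")  # example path
print(type(normal_list), len(normal_list))
for sample in normal_list:
    print(sample.shape, sample.dtype)  # tensors only; no file names stored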

Few-shot Normal Samples for Inference - MVTec

I trained the model using the VisA dataset and would like to check the results on the MVTec dataset.

However, the Google Drive link provided for inference only contains 6 datasets, and the MVTec dataset is not among them.
(https://drive.google.com/drive/folders/1_RvmTqiCc4ZGa-Oq-uF7SOVotE1RW5QZ)

After searching, I was able to find few-shot settings for the MVTec dataset at https://github.com/MediaBrain-SJTU/RegAD?tab=readme-ov-file

If you did not use the dataset from the link, can you share the dataset you used?

lower performance for Visa dataset validation

Hello, I downloaded the provided model and few-shot normal samples as described in the README, and tested just the candle data from the VisA test set with 2 shots, with the test set split by "1cls.csv". The result I get is AUC-ROC: 0.8773, AUC-PR: 0.8693, which is clearly lower than the results described in the paper: AUROC 0.916, AUPRC 0.920.

I cannot find where the problem is, so could you give me some suggestions on what to check?

Problems reproducing performance

Hello. Thank you for your excellent paper, which I read with great interest.

I conducted experiments to reproduce the performance described in the paper.

As a result, I found a performance gap well beyond normal run-to-run variation. (GPU used: RTX TITAN)

Training with MVTecAD and testing with VisA resulted in the following performance:
2-shot: 0.820 (AUROC), 0.837 (AUPRO); paper: 0.858 (AUROC), 0.877 (AUPRO)
4-shot: 0.832 (AUROC), 0.849 (AUPRO); paper: 0.877 (AUROC), 0.902 (AUPRO)
8-shot: 0.827 (AUROC), 0.848 (AUPRO); paper: 0.887 (AUROC), 0.904 (AUPRO)

In addition, I did not change the default settings.

There might be an issue with the command I used, so I am attaching it as well. (I trained with all classes of MVTecAD and tested on VisA; the shot count was modified directly in main.py.)

train command:
python main.py --normal_json_path D://InCTRL-main//data//mvtec_json//AD_json//bottle_normal.json D://InCTRL-main//data//mvtec_json//AD_json//cable_normal.json D://InCTRL-main//data//mvtec_json//AD_json//capsule_normal.json D://InCTRL-main//data//mvtec_json//AD_json//carpet_normal.json D://InCTRL-main//data//mvtec_json//AD_json//grid_normal.json D://InCTRL-main//data//mvtec_json//AD_json//hazelnut_normal.json D://InCTRL-main//data//mvtec_json//AD_json//leather_normal.json D://InCTRL-main//data//mvtec_json//AD_json//metal_nut_normal.json D://InCTRL-main//data//mvtec_json//AD_json//pill_normal.json D://InCTRL-main//data//mvtec_json//AD_json//screw_normal.json D://InCTRL-main//data//mvtec_json//AD_json//tile_normal.json D://InCTRL-main//data//mvtec_json//AD_json//toothbrush_normal.json D://InCTRL-main//data//mvtec_json//AD_json//transistor_normal.json D://InCTRL-main//data//mvtec_json//AD_json//wood_normal.json D://InCTRL-main//data//mvtec_json//AD_json//zipper_normal.json --outlier_json_path D://InCTRL-main//data//mvtec_json//AD_json//bottle_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//cable_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//capsule_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//carpet_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//grid_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//hazelnut_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//leather_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//metal_nut_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//pill_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//screw_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//tile_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//toothbrush_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//transistor_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//wood_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//zipper_outlier.json --val_normal_json_path D://InCTRL-main//data//mvtec_json//AD_json//bottle_val_normal.json D://InCTRL-main//data//mvtec_json//AD_json//cable_val_normal.json D://InCTRL-main//data//mvtec_json//AD_json//capsule_val_normal.json D://InCTRL-main//data//mvtec_json//AD_json//carpet_val_normal.json D://InCTRL-main//data//mvtec_json//AD_json//grid_val_normal.json D://InCTRL-main//data//mvtec_json//AD_json//hazelnut_val_normal.json D://InCTRL-main//data//mvtec_json//AD_json//leather_val_normal.json D://InCTRL-main//data//mvtec_json//AD_json//metal_nut_val_normal.json D://InCTRL-main//data//mvtec_json//AD_json//pill_val_normal.json D://InCTRL-main//data//mvtec_json//AD_json//screw_val_normal.json D://InCTRL-main//data//mvtec_json//AD_json//tile_val_normal.json D://InCTRL-main//data//mvtec_json//AD_json//toothbrush_val_normal.json D://InCTRL-main//data//mvtec_json//AD_json//transistor_val_normal.json D://InCTRL-main//data//mvtec_json//AD_json//wood_val_normal.json D://InCTRL-main//data//mvtec_json//AD_json//zipper_val_normal.json --val_outlier_json_path D://InCTRL-main//data//mvtec_json//AD_json//bottle_val_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//cable_val_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//capsule_val_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//carpet_val_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//grid_val_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//hazelnut_val_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//leather_val_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//metal_nut_val_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//pill_val_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//screw_val_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//tile_val_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//toothbrush_val_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//transistor_val_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//wood_val_outlier.json D://InCTRL-main//data//mvtec_json//AD_json//zipper_val_outlier.json

test command (repeated for each class):
python test.py --val_normal_json_path D://InCTRL-main//data//visa_json//AD_json//macaroni2_val_normal.json --val_outlier_json_path D://InCTRL-main//data//visa_json//AD_json//macaroni2_val_outlier.json --category macaroni2 --few_shot_dir D:/InCTRL-main/data/visa_inference/visa/8/

Is there an issue with the way I used the command?
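
As a side note for anyone retyping this: the per-class json arguments can be collected with globs instead of being written out by hand. A hypothetical helper (the directory is the one from the command above; double-check the globs against your files):

import glob

root = "D:/InCTRL-main/data/mvtec_json/AD_json"

# Training lists exclude the *_val_* files; validation lists match only them.
normal = [p for p in sorted(glob.glob(root + "/*_normal.json")) if "_val_" not in p]
outlier = [p for p in sorted(glob.glob(root + "/*_outlier.json")) if "_val_" not in p]
val_normal = sorted(glob.glob(root + "/*_val_normal.json"))
val_outlier = sorted(glob.glob(root + "/*_val_outlier.json"))

# Print ready-to-paste argument groups for main.py.
print("--normal_json_path", *normal)
print("--outlier_json_path", *outlier)
print("--val_normal_json_path", *val_normal)
print("--val_outlier_json_path", *val_outlier)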

The performance of using 4-shot or 8-shot on the Visa dataset is similar to that of 2-shot

Hello, I validated the 8-shot performance using the provided pre-trained model and few-shot samples, and the results were similar to 2-shot, not as high as mentioned in the paper. I did the following: (1) pointed the test checkpoint file path at checkpoints/8/checkpoint.pyth; (2) changed /fs_samples/visa/2/ in the provided test command to /fs_samples/visa/8/. The results are as follows:
[screenshot of results omitted]
Did I miss any operational steps?

Permission denied error because of CUDA?

When trying to run test.py I run into the error:
PermissionError: [Errno 13] Permission denied: ...

This happens in file_io.py when the open function is called.
It tries to open the checkpoint directory, which I configured in defaults.py.

My question is what the configured path should look like,
i.e. whether it should point directly to the .pt file or to the root of the checkpoints directory.
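
For what it's worth, PermissionError: [Errno 13] on open() is exactly what Windows raises when the path points at a directory rather than a file, which fits the description above; a quick self-contained demo (the folder name is illustrative):

import os

os.makedirs("checkpoints", exist_ok=True)
try:
    # Opening a directory raises PermissionError ([Errno 13]) on Windows
    # and IsADirectoryError on Linux -- so the configured path should point
    # at the checkpoint file itself, not at its folder.
    open("checkpoints")
except OSError as err:
    print(type(err).__name__, err)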

Guidance on Training and Testing with Custom Dataset Similar to MVTec Format

Hello,

I am currently working on a project where I need to train and test a model using my custom dataset, which is structured similarly to the MVTec dataset format. I've been trying to adapt the workflow and methodologies used for the MVTec dataset to fit my dataset's requirements but have encountered some challenges, particularly in generating the custom_dataset.pt file.

Could anyone provide some insights or a step-by-step guide on how to:

1. Adapt the existing training and testing pipeline for a custom dataset that aligns with the MVTec format? Are there specific parameters or configurations that need to be adjusted in the code to accommodate the differences in the dataset?

2. Generate the few_shot.pt file for my dataset? What is the process or script used to create this file from the dataset? Are there specific requirements for the dataset structure or format to successfully generate this file?

For context, my dataset contains images and annotations that mirror the structure used in the MVTec dataset, including similar categories and anomaly types. My goal is to leverage the existing frameworks and tools used for MVTec to achieve comparable performance on my dataset.

I appreciate any advice, scripts, or documentation that could help me navigate these challenges. Thank you in advance for your time and assistance.

Best regards,
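
Until the authors document this, here is a rough sketch of generating per-category json files from an MVTec-style layout; the schema (a list of records with an image path and a 0/1 label) is an assumption for illustration, so check gen_val_json.py for what the repo's loaders actually expect, and see the .pt sketch in the earlier issue for the few-shot file itself:

import glob
import json
import os

root = "custom_dataset/my_category"  # hypothetical MVTec-style category root

# MVTec layout: train/good holds normal training images; test/<defect>
# holds anomalies, except test/good, which holds normal test images.
normal = [{"image_path": p, "target": 0}
          for p in sorted(glob.glob(os.path.join(root, "train", "good", "*")))]
outlier = [{"image_path": p, "target": 1}
           for p in sorted(glob.glob(os.path.join(root, "test", "*", "*")))
           if os.path.basename(os.path.dirname(p)) != "good"]

with open("my_category_normal.json", "w") as f:
    json.dump(normal, f, indent=2)
with open("my_category_outlier.json", "w") as f:
    json.dump(outlier, f, indent=2)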

reproduce the code

Hello, could you please publish the training and testing process in detail, as well as the organization of the files and the associated json files? It is currently quite difficult to reproduce the code.

How to test one image?

How should I write the code to input a single image and return its anomaly score? Please help me.
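
Not from the authors, but the usual pattern is to mirror what test.py does for a batch, with a batch of one. A rough sketch in which build_model, the 240x240 preprocessing, and the model(query, prompts) call signature are all hypothetical stand-ins for the repo's actual code:

import torch
from PIL import Image
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize((240, 240)),  # assumed; match test.py's preprocessing
    transforms.ToTensor(),
])

model = build_model("checkpoints/2/checkpoint.pyth")  # hypothetical: build/load as test.py does
model.eval()
prompts = torch.load("fs_samples/visa/2/candle.pt")   # few-shot normal samples

img = transform(Image.open("query.png").convert("RGB")).unsqueeze(0)  # batch of one
with torch.no_grad():
    score = model(img, prompts)  # hypothetical signature; copy test.py's forward call
print(float(score))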

the training process

Hello, your paper has inspired me a lot, and I would like to reproduce the code. When executing python main.py --normal_json_path $normal-json-files-for-training --outlier_json_path $abnormal-json-files-for-training --val_normal_json_path $normal-json-files-for-testing --val_outlier_json_path $abnormal-json-files-for-testing during training, does each category in each dataset require its own JSON file?
