anomalyclip's Issues

Error with SDD.py

Dear author,

I encountered an issue with running SDD.py.

No such file or directory: 'data/sdd/electrical commutators/train'

Can you tell me why the SDD dataset has a class called "electrical commutators"? I can only see class names such as "kos35", "kos36", etc. in the dataset.

Thank you.

Cannot obtain the meta.json for the SDD dataset

Good work! When I use the provided generate_dataset_json/SDD.py to generate meta.json, the lack of split train and test sets prevents meta.json from being generated. How can I obtain the train and test splits? Additionally, some classes missing from the downloaded MPDD dataset cause the code to fail. Could you provide the complete MPDD dataset?

Training Methodology Issue: Incorrect Dataset Mode

I've encountered an issue in the paper's code regarding the training approach for the anomaly detector on the MVTec AD dataset. The problem lies in train.py at line 35, where the Dataset object for training is created without specifying mode="train":

train_data = Dataset(root=args.train_data_path, transform=preprocess, target_transform=target_transform, dataset_name=args.dataset)

This oversight leads to two critical problems:

  1. The model mistakenly uses the test set for training.
  2. The anomaly detection framework is deprived of abnormal data during training, yet it encounters anomalies during testing. This inconsistency suggests that the model might be functioning as a simple classifier rather than performing anomaly detection.

Could this be revised to ensure correct dataset partitioning and training setup? A sketch of the change I have in mind is below.
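
For reference (assuming the Dataset class accepts a mode keyword for selecting the split; this is my assumption, not a verified fix):

train_data = Dataset(root=args.train_data_path, transform=preprocess,
                     target_transform=target_transform,
                     dataset_name=args.dataset, mode="train")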

Question about reimplementing the original CLIP results

Hi there, congrats on the great work!

In Table 1, I've noticed you also include the results of the original CLIP model.

Could you please share the settings of this experiment? My reimplementation based on your code (roughly sketched after the list below) shows a lot of differences from yours, for example:

  1. The size of CLIP (Base, Large, or Huge?)
  2. The fixed text prompts you used (the encode_text_with_prompt_ensemble method you implemented?)
  3. Any modifications on the vision side, like DPAM?
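
For comparison, the gist of my reimplementation (the prompt texts and the score definition are my own assumptions, not your confirmed setting):

import torch
import clip  # OpenAI CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14@336px", device=device)  # assuming the same backbone as AnomalyCLIP

prompts = ["a photo of a flawless object",   # hypothetical normal prompt
           "a photo of a damaged object"]    # hypothetical anomalous prompt

image = preprocess(Image.open("sample.png")).unsqueeze(0).to(device)
with torch.no_grad():
    text_features = model.encode_text(clip.tokenize(prompts).to(device))
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    image_features = model.encode_image(image)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
anomaly_score = probs[0, 1].item()  # probability mass on the anomalous prompt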

Thank you so much for your patience!

Question about the prefix template in text prompts

I sincerely thank you for your research and code sharing. As a medical doctor, I think your research can have a significant impact in the medical domain as well. I have read your paper and the reviews on the OpenReview page, but I couldn't grasp the details of the text prompt template (the part marked as V_1-V_E, W_1-W_E in the paper). Looking at the training code, it seems that V_1-V_E and W_1-W_E are all initialized with the placeholder "X"; is that correct? My reading is sketched below.
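
Here is how I read the initialization (my assumption from the training code, not a confirmed description):

n_ctx = 12                               # length of the learnable context
prompt_prefix = " ".join(["X"] * n_ctx)  # "X X ... X" placeholder string
prompt = prompt_prefix + " object."
# After tokenization, the embeddings at the "X" positions are replaced by the
# learnable vectors V_1 ... V_E (normal) and W_1 ... W_E (anomalous); the "X"
# characters only reserve token positions and are never used as text.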

P.S. I know how unexpectedly cumbersome it can be, and how much internal resistance there is, to organize and share experimental code after a paper is accepted, so I am truly grateful that you shared it. Once again, congratulations on the ICLR acceptance.

Test with a large dataset

When I test at pixel level with 3,700 images, computing roc_auc_score runs out of RAM. How can I fix this? I use Colab Pro with 50 GB of RAM. A workaround I'm considering is sketched below.
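
(Variable names gt_masks and anomaly_maps are hypothetical.) The idea is to cast to compact dtypes and subsample pixels before calling roc_auc_score, since it materializes and sorts every score in memory:

import numpy as np
from sklearn.metrics import roc_auc_score

gt = gt_masks.ravel().astype(np.uint8)            # (N, H, W) binary masks, flattened
scores = anomaly_maps.ravel().astype(np.float32)  # (N, H, W) anomaly maps, flattened
idx = np.random.default_rng(0).choice(gt.size, size=10_000_000, replace=False)
print(roc_auc_score(gt[idx], scores[idx]))        # approximate pixel-level AUROC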

PermissionError: [Errno 13] Permission denied: '/remote-home'

Hello @zqhang

Thanks for your work. When I run bash test.sh it gives a permission error. Please have a look; it seems the code needs to download the weights into a directory it has no permission to create.

bash test.sh
res.log
Namespace(data_path='/remote-home/iot_zhouqihang/data/mvdataset', save_path='./results/9_12_4_multiscale/zero_shot', checkpoint_path='./checkpoints/9_12_4_multiscale/epoch_15.pth', dataset='mvtec', features_list=[6, 12, 18, 24], image_size=518, depth=9, n_ctx=12, t_n_ctx=4, feature_map_layer=[0, 1, 2, 3], metrics='image-pixel-level', seed=111, sigma=4)
name ViT-L/14@336px
Traceback (most recent call last):
  File "/media/cvpr/CM_1/AnomalyCLIP/test.py", line 195, in <module>
    test(args)
  File "/media/cvpr/CM_1/AnomalyCLIP/test.py", line 43, in test
    model, _ = AnomalyCLIP_lib.load("ViT-L/14@336px", device=device, design_details = AnomalyCLIP_parameters)
  File "/media/cvpr/CM_1/AnomalyCLIP/AnomalyCLIP_lib/model_load.py", line 145, in load
    model_path = _download(_MODELS[name], download_root or os.path.expanduser("/remote-home/iot_zhouqihang/root/.cache/clip"))
  File "/media/cvpr/CM_1/AnomalyCLIP/AnomalyCLIP_lib/model_load.py", line 39, in _download
    os.makedirs(cache_dir, exist_ok=True)
  File "/home/cvpr/anaconda3/lib/python3.9/os.py", line 215, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/home/cvpr/anaconda3/lib/python3.9/os.py", line 215, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/home/cvpr/anaconda3/lib/python3.9/os.py", line 215, in makedirs
    makedirs(head, exist_ok=exist_ok)
  [Previous line repeated 1 more time]
  File "/home/cvpr/anaconda3/lib/python3.9/os.py", line 225, in makedirs
    mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/remote-home'
./checkpoints/9_12_4_multiscale/res.log
Namespace(data_path='/remote-home/iot_zhouqihang/data/Visa', save_path='./results/9_12_4_multiscale_visa/zero_shot', checkpoint_path='./checkpoints/9_12_4_multiscale_visa/epoch_15.pth', dataset='visa', features_list=[6, 12, 18, 24], image_size=518, depth=9, n_ctx=12, t_n_ctx=4, feature_map_layer=[0, 1, 2, 3], metrics='image-pixel-level', seed=111, sigma=4)
name ViT-L/14@336px
Traceback (most recent call last):
  File "/media/cvpr/CM_1/AnomalyCLIP/test.py", line 195, in <module>
    test(args)
  File "/media/cvpr/CM_1/AnomalyCLIP/test.py", line 43, in test
    model, _ = AnomalyCLIP_lib.load("ViT-L/14@336px", device=device, design_details = AnomalyCLIP_parameters)
  File "/media/cvpr/CM_1/AnomalyCLIP/AnomalyCLIP_lib/model_load.py", line 145, in load
    model_path = _download(_MODELS[name], download_root or os.path.expanduser("/remote-home/iot_zhouqihang/root/.cache/clip"))
  File "/media/cvpr/CM_1/AnomalyCLIP/AnomalyCLIP_lib/model_load.py", line 39, in _download
    os.makedirs(cache_dir, exist_ok=True)
  File "/home/cvpr/anaconda3/lib/python3.9/os.py", line 215, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/home/cvpr/anaconda3/lib/python3.9/os.py", line 215, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/home/cvpr/anaconda3/lib/python3.9/os.py", line 215, in makedirs
    makedirs(head, exist_ok=exist_ok)
  [Previous line repeated 1 more time]
  File "/home/cvpr/anaconda3/lib/python3.9/os.py", line 225, in makedirs
    mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/remote-home'
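
A workaround sketch: the traceback suggests load() accepts a download_root argument (as in OpenAI's CLIP loader; this is my assumption), so the hard-coded '/remote-home/...' cache path can be overridden with a writable directory:

import os

model, _ = AnomalyCLIP_lib.load(
    "ViT-L/14@336px",
    device=device,
    design_details=AnomalyCLIP_parameters,
    download_root=os.path.expanduser("~/.cache/clip"),  # any writable directory
)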

Hello author, how is the zero-shot setting reflected?

In your work, how is the zero-shot setting actually reflected? Since it is zero-shot, why is the model trained on the MVTec dataset and then evaluated on MVTec performance metrics? One final question: why is the test set used during both training and testing?

Pure leakage in the test set

It seems that you used the entire test set to train encode_text_learn while also using the test set for testing, which is seriously inconsistent with the zero-shot setting described in your paper. Can you explain this issue?

About the loss and image_loss during training

I trained on the VisA dataset for 11 epochs, but the loss and image_loss are 3.7960 and 0.5325, and I feel something is wrong. I used your settings. Can you share the loss and image_loss values from your training?

Why divide by 0.07?

I don't understand why you divide here:

text_probs = image_features.unsqueeze(1) @ text_features.permute(0, 2, 1)
text_probs = text_probs[:, 0, ...] / 0.07
text_probs = text_probs.squeeze()

and also here:

logit_scale = self.logit_scale.exp()  # nn.Parameter(torch.ones([]) * np.log(1 / 0.07))
logits_per_image = logit_scale * image_features @ text_features.t()
logits_per_text = logits_per_image.t()

In another paper, they multiply by 100 instead, for example:

for layer in range(len(det_patch_tokens)):
    det_patch_tokens[layer] = det_patch_tokens[layer] / det_patch_tokens[layer].norm(dim=-1, keepdim=True)
    anomaly_map = (100.0 * det_patch_tokens[layer] @ text_features)
    anomaly_map = torch.softmax(anomaly_map, dim=-1)[:, :, 1]
    anomaly_score = torch.mean(anomaly_map, dim=-1)
    det_loss += loss_bce(anomaly_score, image_label)
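
For context, my current understanding (an assumption, not the authors' explanation): CLIP is trained with a learned temperature initialized to 1/0.07, so dividing the cosine similarities by 0.07 reproduces that scaling, and since CLIP's training clamps logit_scale.exp() at 100, multiplying by 100.0 as in the other paper is essentially the same trick at the clamp value. A tiny sketch of the effect:

import torch

sims = torch.tensor([0.31, 0.28])          # hypothetical normal/anomalous similarities
print(torch.softmax(sims, dim=0))          # ~[0.507, 0.493]: nearly uniform
print(torch.softmax(sims / 0.07, dim=0))   # ~[0.606, 0.394]: temperature sharpens the softmax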

ONNX export

First of all, thank you for your work and for releasing the AnomalyCLIP code!
I was wondering whether it is possible to export the model to ONNX format? It could be very useful. What I have tried so far is sketched below.
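
A rough, untested sketch that exports only the image tower (assuming it is exposed as model.visual, as in OpenAI CLIP, and converting to fp32 on CPU first):

import torch

model = model.float().cpu().eval()
dummy = torch.randn(1, 3, 518, 518)  # matches image_size=518 from the repo's scripts
torch.onnx.export(
    model.visual, dummy, "anomalyclip_visual.onnx",
    input_names=["image"], output_names=["image_features"],
    opset_version=17,
)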

Thanks

How to test a single image?

Given one input image, how do I determine whether it is anomalous, and how should the code be written? A sketch of what I have in mind is below.
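
(Names like model, preprocess, and text_features, and the encode_image signature, are assumed to be prepared as in test.py; the 0.07 scaling copies the released code. This is a guess, not a verified snippet.)

import torch
from PIL import Image

img = preprocess(Image.open("sample.png")).unsqueeze(0).to(device)
with torch.no_grad():
    image_features, patch_features = model.encode_image(img, args.features_list, DPAM_layer=20)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_probs = image_features.unsqueeze(1) @ text_features.permute(0, 2, 1)
    text_probs = (text_probs[:, 0, ...] / 0.07).softmax(dim=-1)
anomaly_score = text_probs[0, 1].item()  # probability mass on the anomalous prompt
print("anomalous" if anomaly_score > 0.5 else "normal")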

About the prompts

I can't understand the following code.

self.register_buffer("token_prefix_pos", embedding_pos[:, :, :1, :] )
self.register_buffer("token_suffix_pos", embedding_pos[:, :, 1 + n_ctx_pos:, :])
self.register_buffer("token_prefix_neg", embedding_neg[:, :, :1, :])
self.register_buffer("token_suffix_neg", embedding_neg[:, :, 1 + n_ctx_neg:, :])

I think the positive prompt should be ['X X X X X X X X X X X X object.'], so the prefix should be 'X X X X X X X X X X X X ' and the suffix should be '.'.
I don't know if my understanding is wrong; can you help me clarify it?

Thresholding at inference time

Hi, it is not very clear to me how to derive a segmentation-level threshold to decide whether a pixel is classified as "normal" or "anomalous". Does the output anomaly map need to be normalized with something like this?

normalize_anomalymap = (anomalymap - anomalymap.min()) / (anomalymap.max() - anomalymap.min())
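
And then, assuming that normalization, binarize with a cutoff like this (the 0.5 is my arbitrary choice, not from the paper)?

pred_mask = normalize_anomalymap > 0.5  # or a threshold tuned on a validation set, e.g. by maximizing F1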

Thanks

Possible bug

I found that the four maps in similarity_map_list all have identical values. Could the reason be that the intermediate feature is appended to the list without clone()?
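
If that is the cause, the one-line fix might be (similarity_map is a hypothetical name for the intermediate feature):

similarity_map_list.append(similarity_map.clone())  # clone so later in-place updates don't alias earlier entries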

The learnable token embeddings are attached to the first 9 layers of the text encoder for refining the textual space.

First of all, I would like to thank you and your colleagues for your contributions to this domain. In the implementation details you state that "the learnable token embeddings are attached to the first 9 layers of the text encoder for refining the textual space", but I only see the 2nd (i = 1) through 8th (i = 7) layers being attached. Can you explain this for me? Thank you.
