wgcban / ddpm-cd

Remote Sensing Change Detection using Denoising Diffusion Probabilistic Models

Home Page: https://www.wgcban.com/research#h.ar24vwqlm021

License: MIT License

Python 100.00%
change-detection climate-change diffusion-models remote-sensing segmentation

ddpm-cd's Introduction

Welcome to My GitHub Profile!

My research interests lie at the intersection of Computer Vision and Deep Learning, with a specific focus on:

  • Self-Supervised Learning (SSL)
  • Generative AI
  • Remote Sensing
  • Low-Level Vision

🌐 Homepage: www.wgcban.com

Featured Projects

Self-Supervised Representation Learning

  • Guarding Barlow Twins Against Overfitting with Mixed Samples GitHub 🚨
  • AdaMAE: Adaptive Masking for Efficient Spatiotemporal Learning with Masked Autoencoders @ CVPR'23 GitHub

Generative AI

  • Unite and Conquer: Cross Dataset Multimodal Synthesis using Diffusion Models @ CVPR'23 GitHub

Remote Sensing

Change Detection

  • ChangeFormer: A Transformer-Based Siamese Network for Change Detection @ IGARSS'22 GitHub
  • DDPM-CD: Remote Sensing Change Detection using Denoising Diffusion Probabilistic Models GitHub
  • Revisiting Consistency Regularization for Semi-supervised Change Detection in Remote Sensing Images GitHub
  • Metric-CD: Deep Metric Learning for Unsupervised Remote Sensing Change Detection GitHub

Image Super-Resolution / Pansharpening

  • HyperTransformer: A Textural and Spectral Feature Fusion Transformer for Pansharpening @ CVPR'22 GitHub
  • DIP-HyperKite: Hyperspectral Pansharpening Based on Improved Deep Image Prior and Residual Reconstruction @ TGRS'22 GitHub

Segmentation

  • SPIN Road Mapper: Extracting Roads from Aerial Images via Spatial and Interaction Space Graph Reasoning for Autonomous Driving @ ICRA'22 GitHub

SAR Despeckling

  • SAR-Transformer: Transformer-based SAR Image Despeckling @ IGARSS'22 GitHub
  • SAR-Overcomplete: SAR Despeckling Using Overcomplete Convolutional Networks @ IGARSS'22 GitHub
  • SAR-DDPM: SAR Despeckling using a Denoising Diffusion Probabilistic Model @ Geoscience and Remote Sensing Letters GitHub

Low-level Vision

  • Diffuse-Denoise-Count: Accurate Crowd-Counting with Diffusion Models GitHub

ddpm-cd's People

Contributors

ayulockin, drew6017, janspiry, mazeofencryption, wgcban


ddpm-cd's Issues

A question about precision and recall

Hello, I really appreciate your work. Could you share the precision and recall of your experimental results for the different methods? Your help would be greatly appreciated! I look forward to your reply; thank you. My email is [email protected]
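For anyone computing these themselves, precision and recall for binary change maps follow directly from the pixel-level confusion matrix; a minimal NumPy sketch (not the repository's own metric code):

```python
import numpy as np

def precision_recall(pred: np.ndarray, gt: np.ndarray):
    """Compute pixel-level precision and recall for a binary change mask.

    pred, gt: arrays of {0, 1}, where 1 marks "changed" pixels.
    """
    tp = np.sum((pred == 1) & (gt == 1))  # changed, correctly detected
    fp = np.sum((pred == 1) & (gt == 0))  # false alarms
    fn = np.sum((pred == 0) & (gt == 1))  # missed changes
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return precision, recall

pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [1, 0]])
p, r = precision_recall(pred, gt)  # one TP, one FP, one FN -> p = 0.5, r = 0.5
```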

Data for pretraining the diffusion model

Dear author,

Thank you for the great work which has intrigued me a lot.
I would like to kindly ask if the data used for pretraining the diffusion model can be provided.

Best,
Jeff

About the expansion of the pre-training dataset

Dear Author:
Thank you so much for this exciting work!

We note that you have collected a large number of optical remote sensing images to pre-train the diffusion model. To explore the transfer capability of your proposed method on multi-source remote sensing data, we would like to further expand this dataset with SAR images and retrain the diffusion model, so we take the liberty of asking whether you could provide us with your pre-training dataset.

Thank you for your consideration; we look forward to your reply.

Dr. Wu

about the theory

Thank you very much for your work, but I still have a question about the paper. As far as I know, a diffusion model is a generative model that can produce images similar to the training dataset from random noise. I don't understand how that capability relates to your paper; in other words, I don't see why you pre-train a diffusion model on a large number of images and then use it to extract image features.
(I don't mean any offense.)
Looking forward to your reply, thank you.

Test Image

Hi, I want to test on the WHU dataset. When I run:

python ddpm_cd.py --config config/whu_test.json --phase test -enable_wandb -log_eval

there is an error; could you tell me how to fix it? Thanks.

22-10-10 16:44:17.159 - INFO: Model [DDPM] is created.
22-10-10 16:44:17.159 - INFO: Initial Diffusion Model Finished
22-10-10 16:44:17.591 - INFO: Loading pretrained model for CD model [experiments/ddpm-RS-CDHead-WHU_221008_144806/checkpoint/cd_model_E79] ...
Traceback (most recent call last):
File "ddpm_cd.py", line 96, in <module>
change_detection = Model.create_CD_model(opt)
File "/data/chengxi.han/Sigma124/ddpm-cd/model/__init__.py", line 13, in create_CD_model
m = M(opt)
File "/data/chengxi.han/Sigma124/ddpm-cd/model/cd_model.py", line 49, in __init__
self.load_network()
File "/data/chengxi.han/Sigma124/ddpm-cd/model/cd_model.py", line 161, in load_network
network.load_state_dict(torch.load(
File "/data/chengxi.han/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1604, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for cd_head_v2:
size mismatch for decoder.0.block.0.weight: copying a param with shape torch.Size([1024, 3072, 1, 1]) from checkpoint, the shape in current model is torch.Size([1024, 2048, 1, 1]).
size mismatch for decoder.2.block.0.weight: copying a param with shape torch.Size([1024, 3072, 1, 1]) from checkpoint, the shape in current model is torch.Size([1024, 2048, 1, 1]).
size mismatch for decoder.4.block.0.weight: copying a param with shape torch.Size([512, 1536, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 1024, 1, 1]).
size mismatch for decoder.6.block.0.weight: copying a param with shape torch.Size([256, 768, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 512, 1, 1]).
size mismatch for decoder.8.block.0.weight: copying a param with shape torch.Size([128, 384, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 256, 1, 1]).
wandb: Waiting for W&B process to finish... (failed 1). Press Control-C to abort syncing.
wandb: Synced jolly-bee-43: https://wandb.ai/sigmahan/ddpm-RS-CDHead/runs/3h7lld6h
wandb: Synced 6 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
wandb: Find logs at: ./experiments/wandb/run-20221010_164400-3h7lld6h/logs

OutOfMemoryError

When fine-tuning, I wonder whether the code for the change detection dataset is complete. On LEVIR-CD training, even if I reduce the image size to 256 and set the batch size to 1, a single 24 GB card is still not enough. Is it because the DDPM UNet's parameters are too heavy?
In particular, after the message below is printed, memory holds at about 15 GB, and then it overflows.

l_cd: 2.1025e-01 running_acc: 4.8673e-01 epoch_acc: 4.8607e-01 acc: 9.4579e-01 miou: 4.7289e-01 mf1: 4.8607e-01 iou_0: 9.4579e-01 iou_1: 0.0000e+00 F1_0: 9.7214e-01 F1_1: 0.0000e+00 precision_0: 9.4781e-01 precision_1: 0.0000e+00 recall_0: 9.9775e-01 recall_1: 0.0000e+00

Creating [train] change-detection dataloader.
23-02-08 01:27:18.810 - INFO: Dataset [CDDataset - LEVIR-CD-256 - train] is created.
Creating [val] change-detection dataloader.
23-02-08 01:27:18.814 - INFO: Dataset [CDDataset - LEVIR-CD-256 - val] is created.
23-02-08 01:27:18.814 - INFO: Initial Dataset Finished
23-02-08 01:27:24.129 - INFO: Initialization method [orthogonal]
23-02-08 01:27:35.712 - INFO: Loading pretrained model for G [/root/autodl-tmp/diffusion-model-I190000_E97] ...
23-02-08 01:27:41.615 - INFO: Model [DDPM] is created.
23-02-08 01:27:41.616 - INFO: Initial Diffusion Model Finished
23-02-08 01:27:42.054 - INFO: Initialization method [orthogonal]
23-02-08 01:27:43.104 - INFO: Cd Model [CD] is created.
23-02-08 01:27:43.105 - INFO: lr: 0.0001000

23-02-08 01:27:55.545 - INFO: [Training CD]. epoch: [0/119]. Itter: [0/445], CD_loss: 0.80819, running_mf1: 0.04829

23-02-08 01:30:22.239 - INFO: [Training CD (epoch summary)]: epoch: [0/119]. epoch_mF1=0.48607
l_cd: 2.1025e-01 running_acc: 4.8673e-01 epoch_acc: 4.8607e-01 acc: 9.4579e-01 miou: 4.7289e-01 mf1: 4.8607e-01 iou_0: 9.4579e-01 iou_1: 0.0000e+00 F1_0: 9.7214e-01 F1_1: 0.0000e+00 precision_0: 9.4781e-01 precision_1: 0.0000e+00 recall_0: 9.9775e-01 recall_1: 0.0000e+00

Traceback (most recent call last):
File "/home/ddpm-cd/ddpm_cd.py", line 225, in <module>
fe_A_t, fd_A_t, fe_B_t, fd_B_t = diffusion.get_feats(t=t) #np.random.randint(low=2, high=8)
File "/home/ddpm-cd/model/model.py", line 91, in get_feats
fe_B, fd_B = self.netG.feats(B, t)
File "/root/miniconda3/envs/ddpm-cd/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/ddpm-cd/model/sr3_modules/diffusion.py", line 269, in feats
fe, fd = self.denoise_fn(x_noisy, continuous_sqrt_alpha_cumprod, feat_need=True)
File "/root/miniconda3/envs/ddpm-cd/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ddpm-cd/model/sr3_modules/unet.py", line 271, in forward
x = layer(torch.cat((x, feats.pop()), dim=1), t)
File "/root/miniconda3/envs/ddpm-cd/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ddpm-cd/model/sr3_modules/unet.py", line 155, in forward
x = self.res_block(x, time_emb)
File "/root/miniconda3/envs/ddpm-cd/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ddpm-cd/model/sr3_modules/unet.py", line 107, in forward
h = self.block1(x)
File "/root/miniconda3/envs/ddpm-cd/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ddpm-cd/model/sr3_modules/unet.py", line 91, in forward
return self.block(x)
File "/root/miniconda3/envs/ddpm-cd/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/root/miniconda3/envs/ddpm-cd/lib/python3.9/site-packages/torch/nn/modules/container.py", line 204, in forward
input = module(input)
File "/root/miniconda3/envs/ddpm-cd/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ddpm-cd/model/sr3_modules/unet.py", line 55, in forward
return x * torch.sigmoid(x)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.50 GiB (GPU 0; 23.70 GiB total capacity; 19.31 GiB already allocated; 384.56 MiB free; 22.93 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
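For what it's worth, a common way to cut memory in this situation (a generic PyTorch sketch with a hypothetical stand-in encoder, not the repo's actual network) is to ensure the DDPM feature pass never builds a computation graph; setting `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` before launching can also help with the fragmentation the error message mentions:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the frozen DDPM feature extractor; the real one
# lives in model/sr3_modules/unet.py and is far larger.
encoder = nn.Conv2d(3, 8, kernel_size=3, padding=1)

def extract_feats(x: torch.Tensor) -> torch.Tensor:
    # No gradients are needed through the diffusion backbone when training
    # only the CD head, so no_grad() keeps activations from being stored
    # for backprop -- usually the dominant memory cost here.
    with torch.no_grad():
        return encoder(x)

feats = extract_feats(torch.randn(1, 3, 256, 256))
```

Judging by the traceback (grad_mode.py's decorate_context around feats), the repo already appears to wrap feature extraction in no_grad, so the remaining pressure likely comes from the UNet's own forward activations; extracting features for fewer time steps t at once would be the next lever.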

Freeze module

Thank you for your great work!
Could you please tell me where the code that freezes the DDPM's weights is?
I want to train the DDPM and the CD head at the same time.
Best regards
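As a general PyTorch pattern (a minimal sketch with a hypothetical stand-in network and a hypothetical `cd_head`, not the repo's code): freezing means setting `requires_grad = False` on the diffusion model's parameters, and joint training means skipping that step and handing the optimizer both parameter groups:

```python
import torch.nn as nn

# Hypothetical stand-in for the diffusion network (netG in this repo).
netG = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.Conv2d(8, 3, kernel_size=3, padding=1),
)

# Freeze: exclude the DDPM from gradient computation.
for p in netG.parameters():
    p.requires_grad = False

# To train DDPM and CD head jointly, skip the loop above and pass both
# parameter sets to the optimizer, e.g. (cd_head is hypothetical):
# optimizer = torch.optim.Adam(
#     list(netG.parameters()) + list(cd_head.parameters()), lr=1e-4)
```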

ddpm_cd

Thank you for your great work! In which part of the code do you add noise to the image when training the CD head?
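For reference, DDPM-style training adds noise with the closed-form forward process x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps rather than t sequential steps; in this repo the noisy input (x_noisy) appears to be built inside the diffusion module's feats() before the UNet call. A generic NumPy sketch, not the repo's exact code:

```python
import numpy as np

rng = np.random.default_rng(0)

def q_sample(x0: np.ndarray, alpha_bar_t: float) -> np.ndarray:
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I),
    where alpha_bar_t is the cumulative product of (1 - beta) up to step t."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps

x0 = rng.standard_normal((3, 8, 8))  # a toy 3-channel "image"
xt = q_sample(x0, alpha_bar_t=0.5)   # roughly half signal, half noise
```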

Train/Val Reports on wandb can not be opened

Excuse me, I would like to know how much time it took to train the change detection model, because the Train/Val reports on wandb won't open. When I trained on my own, it took a long time and the progress still didn't move.

Hardware + training time?

Hello --

Are you able to share what hardware you used to train the model described in your paper, and what the wall-clock training time was?

Thanks!

question for 'GaussianDiffusion'

Hello, I have a problem again. When I first train the change detection model with "which_model_G" set to "sr3", it succeeds. But when I change it to "ddpm", it fails. I don't know what happened; the error output is as follows:

Traceback (most recent call last):
  File "E:\code\codes\CD\ddpm-cd-master\ddpm_cd.py", line 123, in <module>
    fe_A_t, fd_A_t, fe_B_t, fd_B_t = diffusion.get_feats(t=t) #np.random.randint(low=2, high=8)
  File "E:\code\codes\CD\ddpm-cd-master\model\model.py", line 89, in get_feats
    fe_A, fd_A = self.netG.feats(A, t)
  File "D:\Software\anaconda\envs\ENV01\lib\site-packages\torch\nn\modules\module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'GaussianDiffusion' object has no attribute 'feats'

I hope I can get your help.

TypeError: 'NoneType' object is not callable help

22-07-18 13:32:55.933 - INFO: Begin Model Evaluation (testing).
Traceback (most recent call last):
File "D:\project\ddpm-cd-master\ddpm_cd.py", line 344, in <module>
change_detection.test()
File "D:\project\ddpm-cd-master\model\cd_model.py", line 78, in test
self.pred_cm = self.netCD(self.feats_A, self.feats_B)
File "C:\Users\10576\AppData\Local\conda\conda\envs\ddpm\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "D:\project\ddpm-cd-master\model\cd_modules\cd_head_v2.py", line 107, in forward
diff = torch.abs( layer(f_A) - layer(f_B) )
File "C:\Users\10576\AppData\Local\conda\conda\envs\ddpm\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "D:\project\ddpm-cd-master\model\cd_modules\cd_head_v2.py", line 57, in forward
return self.block(x)
File "C:\Users\10576\AppData\Local\conda\conda\envs\ddpm\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\10576\AppData\Local\conda\conda\envs\ddpm\lib\site-packages\torch\nn\modules\container.py", line 141, in forward
input = module(input)
TypeError: 'NoneType' object is not callable

about fe_A_t, fd_A_t

Hi,
Thank you for your great work. I want to know the meaning of fe_A_t, fd_A_t, and opt['model_cd']['feat_type'].

The Model Doesn't Fully Converge? Meaningless Pixel Patches.

Thank you for your nice work.

I have a question about the sampled images during pretraining (like the right part of the image below). The sampled image looks like one big patch of pixels; does that mean the model hasn't fully converged? (Actually, some samples do contain content such as buildings or roads.) I wonder if you encountered this problem.

This image is sampled from a DDPM (UNet model) trained for around 20k iterations without conditioning. The left part is the ground-truth image and the right part is sampled from random Gaussian noise with 1k time steps and a cosine beta schedule.

It's quite strange, because for natural images (such as the COCO and ImageNet datasets), around 20k iterations seems enough for a model to generate fairly recognizable object shapes. My guess is that it's due to the remote sensing datasets' distribution, which is more diverse than categorical natural images.

Looking forward to your reply.

Question for typo in paper

Dear Author

Hello, I am a UG student at POSTECH, Korea, and I am also an AI researcher at Meissaplanet.
Thank you for open-sourcing this interesting model, DDPM-CD.
As I read your paper, I found some typos in it.

DDPMcd_typo

In line 4 of this paragraph, I think the formula of the posterior distribution should be 'q(x_t|x_{t-1})', but you wrote it as in the image.

Have a nice day :)
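For context, in standard DDPM notation (assuming the paper follows Ho et al.'s conventions), the forward step and the posterior are two distinct distributions, which may be what the typo conflates:

```latex
% forward (diffusion) step
q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\right)
% posterior, conditioned on the clean image x_0
q(x_{t-1} \mid x_t, x_0) = \mathcal{N}\!\left(x_{t-1};\ \tilde{\mu}_t(x_t, x_0),\ \tilde{\beta}_t \mathbf{I}\right)
```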

lightweight head

@wgcban Hi, as for the design of the lightweight head, can you share some experiences and tips? I want to follow this pipeline for other tasks.

TypeError: 'NoneType' object is not callable help


When loading your pretrained file, errors may occur in the following parts of the model file, due to a structural mismatch between the pretrained model downloaded from the provided link and the DDPM model defined in ddpm_cd.

        network.load_state_dict(torch.load(gen_path), strict=False)
        network.load_state_dict(torch.load(
            gen_path), strict=(not self.opt['model']['finetune_norm']))

            
        if self.opt['phase'] == 'train':
            #optimizer
            opt = torch.load(opt_path)
            self.optG.load_state_dict(opt['optimizer'])
            self.begin_step = opt['iter']
            self.begin_epoch = opt['epoch']

After debugging, the statement with strict=(not self.opt['model']['finetune_norm']) fails the strict key matching. After commenting it out, the loaded opt['optimizer'] indeed does not match the format of the initialized optG parameters. I hope the author can take a look; thank you.

Prediction Images Are All Black

I have cloned the repository and downloaded all the necessary pre-trained models and datasets as described in the documentation. However, when I run the code, it executes without any errors, but the generated prediction images are all black.


TypeError: unsupported operand type(s) for %: 'int' and 'NoneType'

Following your code, when training the diffusion model, running "diffusion = Model.create_model(opt)" raises an error. Debugging shows the mistake is in the UNet; the traceback is as follows:
Traceback (most recent call last):
File "F:\Diffusion model-based semantic segmentation for polarization imaging\fs_image.py", line 80, in <module>
diffusion = Model.create_model(opt)
File "F:\Diffusion model-based semantic segmentation for polarization imaging\model\__init__.py", line 7, in create_model
m = M(opt)
File "F:\Diffusion model-based semantic segmentation for polarization imaging\model\model.py", line 18, in __init__
self.netG = self.set_device(networks.define_G(opt))
File "F:\Diffusion model-based semantic segmentation for polarization imaging\model\network.py", line 98, in define_G
image_size=model_opt['diffusion']['image_size']
File "F:\Diffusion model-based semantic segmentation for polarization imaging\model\ddpm_modules\unet.py", line 208, in __init__
dropout=dropout, with_attn=use_attn))
File "F:\Diffusion model-based semantic segmentation for polarization imaging\model\ddpm_modules\unet.py", line 151, in __init__
dim, dim_out, noise_level_emb_dim, norm_groups=norm_groups, dropout=dropout)
File "F:\Diffusion model-based semantic segmentation for polarization imaging\model\ddpm_modules\unet.py", line 101, in __init__
self.block1 = Block(dim, dim_out, groups=norm_groups, dropout=dropout)
File "F:\Diffusion model-based semantic segmentation for polarization imaging\model\ddpm_modules\unet.py", line 85, in __init__
nn.GroupNorm(groups, dim),
File "D:\Anaconda\envs\diffusion_model_semantic\lib\site-packages\torch\nn\modules\normalization.py", line 251, in __init__
if num_channels % num_groups != 0:
if num_channels % num_groups != 0:
TypeError: unsupported operand type(s) for %: 'int' and 'NoneType'

about the meaning of the feature images' names

Hi,
I am interested in your work, and I have a question about the meaning of the feature images' names.
I got many feature images, and I want to know what 'i' and 'level' mean. Is it possible to map them to the multi-scale features?

Practicality of the model on 64x64 images

Thank you for your work; I benefit a lot from it. I now want to train with 64x64 or smaller images. Does this model work for that? Do I need to change the channel_multiplier setting in the UNet module? It means a lot to me. I appreciate it.
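As a general note (not verified against this repo's config schema): UNets of this family downsample once per entry of channel_multiplier, so a 64x64 input typically needs fewer multiplier entries plus a matching image_size. A hypothetical config fragment, with placeholder values:

```json
{
  "model": {
    "unet": {
      "channel_multiplier": [1, 2, 4, 4]
    },
    "diffusion": {
      "image_size": 64
    }
  }
}
```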

About can't load the trained model.

I've trained the model with this code, but I can't load any trained model from the checkpoint directory, not even the pre-trained model you provided.

RuntimeError: Error(s) in loading state_dict for GaussianDiffusion: Missing key(s) in state_dict: "betas", "alphas_cumprod", "alphas_cumprod_prev", "sqrt_alphas_cumprod", "sqrt_one_minus_alphas_cumprod", "log_one_minus_alphas_cumprod", "sqrt_recip_alphas_cumprod", "sqrt_recipm1_alphas_cumprod", "posterior_variance", "posterior_log_variance_clipped", "posterior_mean_coef1", "posterior_mean_coef2", "denoise_fn.noise_level_mlp.1.weight", "denoise_fn.noise_level_mlp.1.bias", "denoise_fn.noise_level_mlp.3.weight", "denoise_fn.noise_level_mlp.3.bias", "denoise_fn.init_conv.weight", "denoise_fn.init_conv.bias", ... (hundreds of further "denoise_fn.*" weight and bias keys across the downs/mid/ups branches; truncated)
"denoise_fn.ups.13.res_block.res_conv.bias", "denoise_fn.ups.14.res_block.noise_func.noise_func.0.weight", "denoise_fn.ups.14.res_block.noise_func.noise_func.0.bias", "denoise_fn.ups.14.res_block.block1.block.0.weight", "denoise_fn.ups.14.res_block.block1.block.0.bias", "denoise_fn.ups.14.res_block.block1.block.3.weight", "denoise_fn.ups.14.res_block.block1.block.3.bias", "denoise_fn.ups.14.res_block.block2.block.0.weight", "denoise_fn.ups.14.res_block.block2.block.0.bias", "denoise_fn.ups.14.res_block.block2.block.3.weight", "denoise_fn.ups.14.res_block.block2.block.3.bias", "denoise_fn.ups.14.res_block.res_conv.weight", "denoise_fn.ups.14.res_block.res_conv.bias", "denoise_fn.ups.15.conv.weight", "denoise_fn.ups.15.conv.bias", "denoise_fn.ups.16.res_block.noise_func.noise_func.0.weight", "denoise_fn.ups.16.res_block.noise_func.noise_func.0.bias", "denoise_fn.ups.16.res_block.block1.block.0.weight", "denoise_fn.ups.16.res_block.block1.block.0.bias", "denoise_fn.ups.16.res_block.block1.block.3.weight", "denoise_fn.ups.16.res_block.block1.block.3.bias", "denoise_fn.ups.16.res_block.block2.block.0.weight", "denoise_fn.ups.16.res_block.block2.block.0.bias", "denoise_fn.ups.16.res_block.block2.block.3.weight", "denoise_fn.ups.16.res_block.block2.block.3.bias", "denoise_fn.ups.16.res_block.res_conv.weight", "denoise_fn.ups.16.res_block.res_conv.bias", "denoise_fn.ups.17.res_block.noise_func.noise_func.0.weight", "denoise_fn.ups.17.res_block.noise_func.noise_func.0.bias", "denoise_fn.ups.17.res_block.block1.block.0.weight", "denoise_fn.ups.17.res_block.block1.block.0.bias", "denoise_fn.ups.17.res_block.block1.block.3.weight", "denoise_fn.ups.17.res_block.block1.block.3.bias", "denoise_fn.ups.17.res_block.block2.block.0.weight", "denoise_fn.ups.17.res_block.block2.block.0.bias", "denoise_fn.ups.17.res_block.block2.block.3.weight", "denoise_fn.ups.17.res_block.block2.block.3.bias", "denoise_fn.ups.17.res_block.res_conv.weight", "denoise_fn.ups.17.res_block.res_conv.bias", 
"denoise_fn.ups.18.res_block.noise_func.noise_func.0.weight", "denoise_fn.ups.18.res_block.noise_func.noise_func.0.bias", "denoise_fn.ups.18.res_block.block1.block.0.weight", "denoise_fn.ups.18.res_block.block1.block.0.bias", "denoise_fn.ups.18.res_block.block1.block.3.weight", "denoise_fn.ups.18.res_block.block1.block.3.bias", "denoise_fn.ups.18.res_block.block2.block.0.weight", "denoise_fn.ups.18.res_block.block2.block.0.bias", "denoise_fn.ups.18.res_block.block2.block.3.weight", "denoise_fn.ups.18.res_block.block2.block.3.bias", "denoise_fn.ups.18.res_block.res_conv.weight", "denoise_fn.ups.18.res_block.res_conv.bias", "denoise_fn.final_conv.block.0.weight", "denoise_fn.final_conv.block.0.bias", "denoise_fn.final_conv.block.3.weight", "denoise_fn.final_conv.block.3.bias". Unexpected key(s) in state_dict: "decoder.0.block.0.weight", "decoder.0.block.0.bias", "decoder.0.block.2.weight", "decoder.0.block.2.bias", "decoder.1.block.0.weight", "decoder.1.block.0.bias", "decoder.1.block.2.cSE.fc1.weight", "decoder.1.block.2.cSE.fc1.bias", "decoder.1.block.2.cSE.fc2.weight", "decoder.1.block.2.cSE.fc2.bias", "decoder.1.block.2.sSE.conv.weight", "decoder.1.block.2.sSE.conv.bias", "decoder.2.block.0.weight", "decoder.2.block.0.bias", "decoder.2.block.2.weight", "decoder.2.block.2.bias", "decoder.3.block.0.weight", "decoder.3.block.0.bias", "decoder.3.block.2.cSE.fc1.weight", "decoder.3.block.2.cSE.fc1.bias", "decoder.3.block.2.cSE.fc2.weight", "decoder.3.block.2.cSE.fc2.bias", "decoder.3.block.2.sSE.conv.weight", "decoder.3.block.2.sSE.conv.bias", "decoder.4.block.0.weight", "decoder.4.block.0.bias", "decoder.4.block.2.weight", "decoder.4.block.2.bias", "decoder.5.block.0.weight", "decoder.5.block.0.bias", "decoder.5.block.2.cSE.fc1.weight", "decoder.5.block.2.cSE.fc1.bias", "decoder.5.block.2.cSE.fc2.weight", "decoder.5.block.2.cSE.fc2.bias", "decoder.5.block.2.sSE.conv.weight", "decoder.5.block.2.sSE.conv.bias", "decoder.6.block.0.weight", "decoder.6.block.0.bias", 
"decoder.6.block.2.weight", "decoder.6.block.2.bias", "decoder.7.block.0.weight", "decoder.7.block.0.bias", "decoder.7.block.2.cSE.fc1.weight", "decoder.7.block.2.cSE.fc1.bias", "decoder.7.block.2.cSE.fc2.weight", "decoder.7.block.2.cSE.fc2.bias", "decoder.7.block.2.sSE.conv.weight", "decoder.7.block.2.sSE.conv.bias", "decoder.8.block.0.weight", "decoder.8.block.0.bias", "decoder.8.block.2.weight", "decoder.8.block.2.bias", "clfr_stg1.weight", "clfr_stg1.bias", "clfr_stg2.weight", "clfr_stg2.bias".
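(Note: the missing `denoise_fn.*` keys together with the unexpected `decoder.*` / `clfr_stg*` keys suggest the diffusion model is being pointed at the change-detection-head checkpoint rather than the diffusion checkpoint — an assumption based on the key names alone, so double-check the checkpoint paths in your config. A quick, hedged way to see which model a checkpoint actually belongs to before loading it:

```python
def checkpoint_prefixes(state):
    """Return the sorted top-level key prefixes of a checkpoint's state_dict.

    Some checkpoints wrap the weights under a "state_dict" key; unwrap
    that first so the comparison sees the real parameter names.
    """
    if isinstance(state, dict) and "state_dict" in state:
        state = state["state_dict"]
    return sorted({key.split(".", 1)[0] for key in state})

# Usage (hypothetical checkpoint path):
#   import torch
#   state = torch.load("some_checkpoint.pth", map_location="cpu")
#   print(checkpoint_prefixes(state))
# A prefix like 'denoise_fn' belongs to the diffusion model, while
# 'decoder' / 'clfr_stg1' / 'clfr_stg2' belong to the CD head.
```

If the prefixes do not match the model you are constructing, swapping the two checkpoint paths is usually the fix.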

FileNotFoundError: [Errno 2] No such file or directory: 'dataset/CDD-256/A\\filenames.txt'

22-07-19 12:54:06.763 - INFO: Cd Model [CD] is created.
22-07-19 12:54:06.763 - INFO: Begin Model Evaluation (testing).
Traceback (most recent call last):
  File "D:\test\ddpm-cd-master\ddpm_cd.py", line 326, in <module>
    for current_step, test_data in enumerate(test_loader):
  File "C:\Anaconda3\envs\torch\lib\site-packages\torch\utils\data\dataloader.py", line 530, in __next__
    data = self._next_data()
  File "C:\Anaconda3\envs\torch\lib\site-packages\torch\utils\data\dataloader.py", line 1224, in _next_data
    return self._process_data(data)
  File "C:\Anaconda3\envs\torch\lib\site-packages\torch\utils\data\dataloader.py", line 1250, in _process_data
    data.reraise()
  File "C:\Anaconda3\envs\torch\lib\site-packages\torch\_utils.py", line 457, in reraise
    raise exception
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "C:\Anaconda3\envs\torch\lib\site-packages\torch\utils\data\_utils\worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "C:\Anaconda3\envs\torch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Anaconda3\envs\torch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "D:\test\ddpm-cd-master\data\CDDataset.py", line 65, in __getitem__
    img_A = Image.open(A_path).convert("RGB")
  File "C:\Anaconda3\envs\torch\lib\site-packages\PIL\Image.py", line 2953, in open
    fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'dataset/CDD-256/A\\filenames.txt'

I tried all of the models and they work great; only the CDD test run produces this error. Sorry to bother you again.
