yuzheng9 / C2PNet
[CVPR 2023] Curricular Contrastive Regularization for Physics-aware Single Image Dehazing
Is there an estimated date for releasing the training code?
Hello, when training on ITS, the copied command reported an error:
log_dir : logs/'its_train'_C2PNet_3_19_default_clcr
model_name: 'its_train'C2PNet_3_19_default_clcr
Traceback (most recent call last):
File "main.py", line 172, in
loader_train = loaders[opt.trainset]
KeyError: "'its_train'"
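Judging from the traceback, one likely cause is that the literal single quotes in `--trainset='its_train'` were passed through to Python unchanged (this happens, for example, on Windows cmd, which does not strip single quotes), so the dictionary lookup fails. A minimal reproduction of the lookup, assuming `loaders` is keyed by plain dataset names as the traceback suggests:

```python
# Assumed shape of the loaders dict in main.py: keyed by plain dataset names.
loaders = {"its_train": "<train loader>", "its_test": "<test loader>"}

bad_key = "'its_train'"          # quotes passed through literally by the shell
good_key = bad_key.strip("'")    # "its_train"

print(bad_key in loaders, good_key in loaders)  # -> False True
```

Passing the argument without quotes (`--trainset its_train`) should avoid the bad key.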
In main.py, I notice that the return statement may not be right: test() only evaluates the first image in loader_test.
```python
def test(test_model, loader_test):
    test_model.eval()
    torch.cuda.empty_cache()
    ssims = []
    psnrs = []
    for i, (inputs, targets) in enumerate(loader_test):
        inputs = inputs.to(opt.device)
        targets = targets.to(opt.device)
        pred = test_model(inputs)
        ssim1 = ssim(pred, targets).item()
        psnr1 = psnr(pred, targets)
        ssims.append(ssim1)
        psnrs.append(psnr1)
        return np.mean(ssims), np.mean(psnrs)
```
The PSNR is used to calculate the curriculum_weight. I'm not sure whether there is an unexpected indent before 'return', which would make it return inside the loop?
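The suspicion is easy to check in isolation: a `return` indented inside the `for` body exits on the first iteration, so the "mean" covers a single image. A toy example, unrelated to the repo's actual metrics:

```python
def mean_buggy(values):
    results = []
    for v in values:
        results.append(v * 2)
        return sum(results) / len(results)  # indented inside the loop: first item only

def mean_fixed(values):
    results = []
    for v in values:
        results.append(v * 2)
    return sum(results) / len(results)      # dedented: averages over all items

print(mean_buggy([1, 2, 3]), mean_fixed([1, 2, 3]))  # -> 2.0 4.0
```

Dedenting the `return` in test() by one level would make it average over the whole loader.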
Average PSNR is 34.073019217253574
Average SSIM is 0.9861119985580444
These are the results I got testing your method with your released weights on SOTS outdoor, and they are inconsistent with the numbers reported in the paper. Could you explain the discrepancy? I don't think it is due to a difference in machine configuration.
I would like to build a custom pkl. What source code is used to create the pkl files?
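The repo does not document how its .pkl files are produced, so the following only sketches the generic stdlib pattern for writing and reading one; the record layout here is invented for illustration:

```python
import io
import pickle

# Invented record layout; the repo's actual pkl contents may differ.
record = {"net": "C2PNet", "gps": 3, "blocks": 19}

buf = io.BytesIO()          # stands in for a file opened with open(path, "wb")
pickle.dump(record, buf)

buf.seek(0)
loaded = pickle.load(buf)   # open(path, "rb") in the file-based case
print(loaded == record)     # -> True
```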
Hello! I would like to ask whether negatives can be selected this way: suppose I have built a model and trained it with only an L1 loss, saving the network weights at each epoch. I then pick the checkpoints whose training PSNR is roughly 25, 30, 31, 32, and 33 and use the dehazed images they generate as negatives. Would this work as well as negatives generated by other dehazing methods? Thank you for your answer.
Hello, authors.
Thank you very much for providing such excellent work and the corresponding code.
I have inference running, and I would now like to quickly verify training and understand the source code. However, the training dataset file its.lmdb is rather large. Could you provide a mini version of the dataset, say 20 hazy images with all their corresponding positive and negative samples, as plain images rather than an LMDB (since an LMDB can be built with create_lmdb)?
Thank you very much for your support and help; I look forward to your reply.
First of all, thank you for your excellent work!
Currently the ground truth for the NH-HAZE2 validation and test sets cannot be obtained from the official website. Could you please upload the NH-HAZE2 data to Google Drive as well? Many thanks.
Hello, which version of the RESIDE dataset do the ITS and OTS sets used in the paper come from? The official RESIDE site offers three versions (reside-v0, reside-standard, and RESIDE-β), and the number of hazy images in ITS and OTS differs between them. Also, with the ITS lmdb you provide (89 GB), which contains so many negative samples, how long does training take on the single RTX 3090 mentioned in the paper? Thanks!
Thank you so much for your awesome work! I would like to ask when the training code will be released.
Best wishes.
Why do I keep getting CUDA out of memory?
Tried to allocate 70.00 MiB (GPU 0; 16.00 GiB total capacity; 29.04 GiB already allocated; 0 bytes free; 29.11 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
What hardware was this trained on? Doesn't the paper say a single 3090?
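For what it's worth, the allocator hint in the error message itself can be tried, together with a smaller batch and crop size. Assuming the CLI flags match the Namespace printed elsewhere in this thread (`--bs`, `--crop`, `--crop_size`; these are assumptions, not confirmed against the repo), a run might look like:

```shell
# PYTORCH_CUDA_ALLOC_CONF is documented by PyTorch; the flags below are
# assumptions based on the argument dump quoted in another issue.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
python main.py --trainset its_train --bs 1 --crop --crop_size 120
```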
Hi,
I am very interested in your work on the dehazing task, but during my study I found it was not easy to obtain the OTS data, because I could not find other networks' model weights consistent with yours. Could you please send me the OTS.lmdb file, or the network model weights used in your paper?
Thank you very much.
How were the hazy photos in the training dataset obtained? Was the haze created physically?
Hello, may I ask how to train on my own dataset? Do I generate an LMDB file first? And how are the negative samples generated?
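The negative-sample layout is not documented in this thread; judging only by the n1–n6 paths mentioned in other issues, each hazy/clear pair appears to be stored alongside six negative restorations. A hypothetical key scheme for such a store (all names invented for illustration, not taken from create_lmdb.py):

```python
def make_keys(image_id, num_negatives=6):
    """Hypothetical LMDB key scheme: one hazy/clear pair plus n1..n6 negatives."""
    keys = {"hazy": f"hazy_{image_id}", "clear": f"clear_{image_id}"}
    for i in range(1, num_negatives + 1):
        keys[f"n{i}"] = f"n{i}_{image_id}"
    return keys

print(sorted(make_keys("0001")))
```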
/root/miniconda3/bin/python3 /root/.pycharm_helpers/pydev/pydevd.py --multiprocess --client localhost --port 41281 --file /root/autodl-tmp/main.py
Connected to pydev debugger (build 231.9225.15)
Namespace(ablation_type=0, beta1=0.9, beta2=0.999, blocks=19, bs=2, cl_lambda=0.25, clcrloss=False, clip=False, crop=False, crop_size=240, dataset_dir='./data/', device='cuda', eval_step=5000, gps=3, latest_model_dir='./trained_models/its_train_C2PNet_3_19_default_latest.pk', loss_weight=0.2, lr=0.0001, model_dir='./trained_models/its_train_C2PNet_3_19_default.pk', name='default', net='C2PNet', no_lr_sche=False, norm=False, resume=True, steps=1000000, testset='its_test', trainset='its_train')
model_dir: ./trained_models/its_train_C2PNet_3_19_default.pk
2
crop size whole_img
Process finished with exit code 255
Has anyone else encountered this error?
Are the experimental results on the real-world datasets Dense-Haze and NH-Haze2 in your paper obtained directly by testing the two released OTS and ITS models, or did you also train on these two real datasets without publishing those models? Thank you!
Hello author, I have a question: how is the rate mentioned in the code (non-consensual, 1:10; consensual, 1:7) controlled?
The paper uses different ratios (1:7; 1:10), yet the training batch size is actually 2.
It's really good work! But I have a question.
In main.py, diff_list[] in the curriculum_weight() function contains 6 fixed PSNR values. I think the PSNR of the negatives for each training image may differ, so why are there 6 fixed values here?
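I can't confirm the actual contents of diff_list, but the idea behind fixed PSNR anchors can be sketched: the anchors partition negatives into difficulty bins regardless of the per-image PSNR spread. The thresholds below are placeholders, not the six values in main.py:

```python
# Placeholder anchors, NOT the six values in main.py.
DIFF_LIST = [20.0, 23.0, 26.0, 29.0, 32.0, 35.0]

def difficulty_bin(neg_psnr):
    """Count how many fixed anchors this negative's PSNR exceeds (0..6)."""
    return sum(neg_psnr > t for t in DIFF_LIST)

print(difficulty_bin(27.0))  # -> 3
```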
Thank you for your answer!
When will the code be released?
Could you provide a Baidu Cloud or Google Drive link for the NH-haze2 dataset?
Since you have not released the OTS dataset, I tried to build it with create_lmdb.py. May I ask which model was used to generate the images stored under the pred_FFANEW_its path, and is there a pretrained model for it?
Hello, why does the copied command report an error when training on the ITS dataset?
model_dir: ./trained_models/'its_train'_C2PNet_3_19_default_clcr.pk
2
crop size 240
crop size whole img
crop size whole img
log_dir : logs/'its_train'_C2PNet_3_19_default_clcr
model_name: 'its_train'C2PNet_3_19_default_clcr
Traceback (most recent call last):
File "main.py", line 172, in
loader_train = loaders[opt.trainset]
KeyError: "'its_train'"
In your create_lmdb.py there are six different paths, n1 through n6. Are these the dehazed images produced by six different methods? And could dehazed results from other methods be used as negative samples instead?
Hi, thank you for sharing your awesome work! I have a small question about your code. According to Figure 2 in your paper, the features of the first 3×3 convolution are connected to the final output of the last 3×3 convolution. However, in your code, it appears that the input itself is connected to the last convolution instead. Could you explain why this is?
May I ask where the NH-HAZE2 data used in the paper can be downloaded, and how the results of the other methods on that dataset were obtained? Thanks!
Hello! I have a question about the weight of the contrastive loss. With the weight set to 0.2, in my reproduction the contrastive loss is far larger than the L1 loss, and the resulting images show distortion artifacts. Is it acceptable for the contrastive loss to be much larger than the L1 loss, or does the situation improve later in training? Thank you for your answer!
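For reference, with the loss_weight=0.2 that appears in the argument dump quoted earlier in this thread, the combined objective is presumably a weighted sum of this shape (a sketch, not the repo's exact code):

```python
def total_loss(l1_loss, contrastive_loss, loss_weight=0.2):
    """Sketch of the weighted sum implied by loss_weight=0.2; the repo's
    actual combination of terms may differ."""
    return l1_loss + loss_weight * contrastive_loss

print(total_loss(1.0, 10.0))  # -> 3.0
```

Note that with this form, a contrastive term an order of magnitude larger than the L1 term still contributes on a comparable scale after weighting.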
Hello, could you also upload the OTS.lmdb dataset to Baidu Netdisk? Many thanks, and best wishes for your research!
Because I cannot find the lmdb, I cannot run main.py. I also want to create the train pkl, so I wrote my own source code for it; can I not train with that code?
When I run C2PNet on my own training dataset, the output images are displayed as all white.