
llflow's Introduction


[AAAI 2022 Oral] Low-Light Image Enhancement with Normalizing Flow

Low-Light Image Enhancement with Normalizing Flow
Yufei Wang, Renjie Wan, Wenhan Yang, Haoliang Li, Lap-pui Chau, Alex C. Kot
In AAAI'2022

Overall

[Figure: overall framework]

Quantitative results

Evaluation on LOL

The evaluation results on LOL are as follows:

Method PSNR SSIM LPIPS
LIME 16.76 0.56 0.35
RetinexNet 16.77 0.56 0.47
DRBN 20.13 0.83 0.16
KinD 20.87 0.80 0.17
KinD++ 21.30 0.82 0.16
LLFlow (Ours) 25.19 0.93 (0.86) 0.11

(Our method measures the SSIM value on grayscale images, strictly following SSIM's official code, and we assume that all methods evaluated on LOL follow this official code. Since some recent Python implementations of SSIM also provide an option for non-grayscale images, the results obtained using such unofficial implementations are given in brackets.)
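
For reference, here is a minimal sketch (not from this repo; it assumes OpenCV and scikit-image) of the two SSIM conventions described above. The image paths are placeholders:

import cv2
from skimage.metrics import structural_similarity

out = cv2.imread("enhanced.png")  # hypothetical enhanced output
gt = cv2.imread("gt.png")         # hypothetical ground truth

# Official-style SSIM: convert both images to grayscale first.
ssim_gray = structural_similarity(
    cv2.cvtColor(out, cv2.COLOR_BGR2GRAY),
    cv2.cvtColor(gt, cv2.COLOR_BGR2GRAY),
    data_range=255)

# Unofficial variant: SSIM averaged over the color channels
# (use multichannel=True on scikit-image < 0.19).
ssim_color = structural_similarity(out, gt, channel_axis=2, data_range=255)

print(f"grayscale SSIM: {ssim_gray:.4f}, color SSIM: {ssim_color:.4f}")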

Computational Cost

[Table: computational cost and performance]

The computational cost and performance of the models are given in the table above. We evaluate the cost on one image of size 400×600. Ours (large) is the standard model reported in the supplementary material, and Ours (small) is a model with reduced parameters. Both the training config files and pre-trained models are provided for the two models.
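
As a rough cross-check, parameter counts comparable to the table can be obtained from a loaded model; a minimal sketch, not part of the repo:

import torch

def count_params_m(model: torch.nn.Module) -> float:
    # Total number of parameters, in millions.
    return sum(p.numel() for p in model.parameters()) / 1e6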

Visual Results

Visual comparison with state-of-the-art low-light image enhancement methods on the LOL dataset.

Get Started

Dependencies and Installation

  • Python 3.8
  • PyTorch 1.9

  1. Clone Repo
git clone https://github.com/wyf0912/LLFlow.git
  2. Create Conda Environment
conda create --name LLFlow python=3.8
conda activate LLFlow
  3. Install Dependencies
cd LLFlow
pip install -r ./code/requirements.txt
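
As a quick sanity check of the environment (a sketch, not part of the repo; run inside the activated environment):

import torch
print(torch.__version__)          # expect 1.9.x
print(torch.cuda.is_available())  # True if a CUDA GPU is visible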

Dataset

You can refer to the following links to download the LOL and LOL-v2 datasets.
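
Based on the paths used in the provided configs (e.g., data/LOL/eval15/low), the expected directory layout is roughly as follows; the our485 folder name follows the official LOL release and is an assumption here:

data/LOL/
├── our485/    # 485 training pairs
│   ├── low/
│   └── high/
└── eval15/    # 15 evaluation pairs
    ├── low/
    └── high/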

Pretrained Model

We provide the pre-trained models with the following settings:

  • A lightweight model with promising performance trained on LOL [Google drive] with training config file ./confs/LOL_smallNet.yml.
  • A standard-sized model trained on LOL [Google drive] with training config file ./confs/LOL-pc.yml.
  • A standard-sized model trained on LOL-v2 [Google drive] with training config file ./confs/LOLv2-pc.yml.

Test

You can check the training log to obtain the performance of the model. You can also directly test the performance of the pre-trained model as follows:

  1. Modify the paths to the dataset and the pre-trained model. You need to modify the following paths in the config files in ./confs:
#### Test Settings
dataroot_unpaired # needed for testing with unpaired data
dataroot_GT # needed for testing with paired data
dataroot_LR # needed for testing with paired data
model_path
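
For example, the test-related entries of a config might look like this (illustrative paths mirroring the defaults; adjust to your setup):

#### Test Settings
dataroot_unpaired: data/LOL/eval15/low
dataroot_GT: data/LOL/eval15/high
dataroot_LR: data/LOL/eval15/low
model_path: trained_models/trained.pth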
  2. Test the model

To test the model with paired data and obtain evaluation results, e.g., PSNR, SSIM, and LPIPS, you need to specify the data paths dataroot_LR and dataroot_GT and the model path model_path in the config file. Then run

python test.py --opt your_config_path
# You need to specify an appropriate config file since it stores the config of the model, e.g., the number of layers.

To test the model with unpaired data, you need to specify the unpaired data path dataroot_unpaired, and model path model_path in the config file. Then run

python test_unpaired.py --opt your_config_path -n results_folder_name
# You need to specify an appropriate config file since it stores the config of the model, e.g., the number of layers.

You can check the output in ../results.

Train

All logging files generated in the training process, e.g., log messages, checkpoints, and snapshots, will be saved to ./experiments.

  1. Modify the paths to the dataset in the config YAML files. We provide the following training configs for both the LOL and LOL-v2 benchmarks. You can also create your own configs for your own dataset.
./confs/LOL_smallNet.yml
./confs/LOL-pc.yml
./confs/LOLv2-pc.yml

You need to modify the following terms:

datasets.train.root
datasets.val.root
gpu_ids: [0] # Our model can be trained using a single GPU with memory>20GB. You can also train the model using multiple GPUs by adding more GPU ids in it.
  2. Train the network.
python train.py --opt your_config_path
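
For instance, the relevant entries of a training config might look like this (illustrative values mirroring the defaults in the provided configs):

datasets:
  train:
    root: data/LOL
  val:
    root: data/LOL
gpu_ids: [0]  # e.g., [0, 1] to train on two GPUs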

Citation

If you find our work useful for your research, please cite our paper:

@article{wang2021low,
  title={Low-Light Image Enhancement with Normalizing Flow},
  author={Wang, Yufei and Wan, Renjie and Yang, Wenhan and Li, Haoliang and Chau, Lap-Pui and Kot, Alex C},
  journal={arXiv preprint arXiv:2109.05923},
  year={2021}
}

Contact

If you have any questions, please feel free to contact us via [email protected]. (Please use your institutional or school email address to avoid being classified as junk mail.)


llflow's Issues

Question about color map calculation

Thank you for posting the code of the paper.
I have some doubts about the color map and would like to consult, as follows:
[Screenshot: Figure 1]
[Screenshot: Figure 2]
Q1: Is the torch.cat operation repeated at points 1 and 3 in Figure 1?
Q2: What is the effect of taking the exponent of the input to obtain raw_low_input (point 2 in Figure 1)?
Q3: Does mean_c(x) in the paper correspond to x.sum in the code (example in Figure 2)?

NaN or Inf found in input tensor.

Hello, sorry to bother you again. I recently encountered a new error in training; do you know why?
[Screenshot of the error]
The epoch at which the error occurs is different each time.

Cross-Dataset Evaluation

Hello, I would like to know whether you used the full test set of VE-LOL for the cross-dataset evaluation, or some other subset.

VE-LOL dataset

Thank you for your excellent work. May I ask how to download the VE-LOL dataset?
Thanks in advance.

ve lol dataset

Hello, after training on the LOL dataset and transferring to testing on the VE-LOL dataset, did you test on both the Captured and Synthetic subsets, or only on the Captured subset? Also, when training on VE-LOL alone, which part did you use?

About the LOL dataset

Hello, the LOL training set I downloaded from the link you provided contains about 789 images, but your paper says the LOL training set should contain 485 images. Is there a problem here?

RuntimeError: svd_cuda: the updating process of SBDSDC did not converge (error: 22)

I have a problem when I run the training code.
After around 4,000 iterations, I got an error.
This is the training log:

OrderedDict([('manual_seed', 10), ('lr_G', 0.0005), ('weight_decay_G', 0), ('beta1', 0.9), ('beta2', 0.99), ('lr_scheme', 'MultiStepLR'), ('warmup_iter', 10), ('lr_steps_rel', [0.5, 0.75, 0.9, 0.95]), ('lr_gamma', 0.5), ('weight_l1', 0), ('weight_fl', 1), ('niter', 30000), ('val_freq', 200), ('lr_steps', [15000, 22500, 27000, 28500])])
Disabled distributed training.
22-05-17 21:58:49.336 - INFO: name: train_color_as_full_z_nosieMapBugFixed_noavgpool
use_tb_logger: True
model: LLFlow
distortion: sr
scale: 1
gpu_ids: [0]
dataset: LoL
optimize_all_z: False
cond_encoder: ConEncoder1
train_gt_ratio: 0.2
avg_color_map: False
concat_histeq: True
histeq_as_input: False
concat_color_map: False
gray_map: False
align_condition_feature: False
align_weight: 0.001
align_maxpool: True
to_yuv: False
encode_color_map: False
le_curve: False
datasets:[
train:[
root: data/LOL
quant: 32
use_shuffle: True
n_workers: 4
batch_size: 16
use_flip: True
color: RGB
use_crop: True
GT_size: 160
noise_prob: 0
noise_level: 5
log_low: True
gamma_aug: False
phase: train
scale: 1
data_type: img
]
val:[
root: data/LOL
n_workers: 1
quant: 32
n_max: 20
batch_size: 1
log_low: True
phase: val
scale: 1
data_type: img
]
]
dataroot_unpaired: data/LOL/eval15/low
dataroot_GT: data/LOL/eval15/high
dataroot_LR: data/LOL/eval15/low
model_path: trained_models/trained.pth
heat: 0
network_G:[
which_model_G: LLFlow
in_nc: 3
out_nc: 3
nf: 64
nb: 24
train_RRDB: False
train_RRDB_delay: 0.5
flow:[
K: 12
L: 3
noInitialInj: True
coupling: CondAffineSeparatedAndCond
additionalFlowNoAffine: 2
split:[
enable: False
]
fea_up0: True
stackRRDB:[
blocks: [1, 3, 5, 7]
concat: True
]
]
scale: 1
]
path:[
strict_load: True
resume_state: auto
root: /home/jaemin/Desktop/LLFlow-main
experiments_root: /home/jaemin/Desktop/LLFlow-main/experiments/train_color_as_full_z_nosieMapBugFixed_noavgpool
models: /home/jaemin/Desktop/LLFlow-main/experiments/train_color_as_full_z_nosieMapBugFixed_noavgpool/models
training_state: /home/jaemin/Desktop/LLFlow-main/experiments/train_color_as_full_z_nosieMapBugFixed_noavgpool/training_state
log: /home/jaemin/Desktop/LLFlow-main/experiments/train_color_as_full_z_nosieMapBugFixed_noavgpool
val_images: /home/jaemin/Desktop/LLFlow-main/experiments/train_color_as_full_z_nosieMapBugFixed_noavgpool/val_images
]
train:[
manual_seed: 10
lr_G: 0.0005
weight_decay_G: 0
beta1: 0.9
beta2: 0.99
lr_scheme: MultiStepLR
warmup_iter: 10
lr_steps_rel: [0.5, 0.75, 0.9, 0.95]
lr_gamma: 0.5
weight_l1: 0
weight_fl: 1
niter: 30000
val_freq: 200
lr_steps: [15000, 22500, 27000, 28500]
]
val:[
n_sample: 4
]
test:[
heats: [0.0, 0.7, 0.8, 0.9]
]
logger:[
print_freq: 200
save_checkpoint_freq: 1000.0
]
is_train: True
dist: False

22-05-17 21:58:49.351 - INFO: Random seed: 10
rrdb params 0
22-05-17 21:58:56.276 - INFO: Model [LLFlowModel] is created.
Parameters of full network 38.8595 and encoder 17.4968
22-05-17 21:58:56.286 - INFO: Start training from epoch: 0, iter: 0
/home/jaemin/anaconda3/envs/mymodel/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:136: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
<epoch: 0, iter: 1, lr:5.000e-05, t:-1.00e+00, td:2.79e-01, eta:-8.33e+00, nll:0.000e+00>
/home/jaemin/anaconda3/envs/mymodel/lib/python3.7/site-packages/torch/nn/functional.py:3103: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
warnings.warn("The default behavior for interpolate/upsample with float scale_factor changed "
22-05-17 21:59:00.339 - INFO: Parameters of full network 37.6774 and encoder 17.4211
<epoch: 0, iter: 2, lr:1.000e-04, t:-1.00e+00, td:5.03e-03, eta:-8.33e+00, nll:-7.233e+00>
<epoch: 0, iter: 3, lr:1.500e-04, t:3.81e+00, td:5.90e-03, eta:3.17e+01, nll:-6.932e+00>
<epoch: 0, iter: 4, lr:2.000e-04, t:1.40e+00, td:1.65e-03, eta:1.16e+01, nll:-7.222e+00>
<epoch: 0, iter: 5, lr:2.500e-04, t:1.53e+00, td:1.21e-03, eta:1.28e+01, nll:9.519e+02>
<epoch: 0, iter: 6, lr:3.000e-04, t:1.44e+00, td:7.74e-03, eta:1.20e+01, nll:1.007e+03>
<epoch: 0, iter: 7, lr:3.500e-04, t:1.40e+00, td:1.87e-03, eta:1.17e+01, nll:8.534e+02>
<epoch: 0, iter: 8, lr:4.000e-04, t:1.43e+00, td:1.58e-03, eta:1.19e+01, nll:9.690e+02>
<epoch: 0, iter: 9, lr:4.500e-04, t:1.39e+00, td:1.63e-03, eta:1.16e+01, nll:6.836e+02>
<epoch: 0, iter: 10, lr:4.500e-04, t:1.43e+00, td:1.73e-03, eta:1.20e+01, nll:5.680e+02>
<epoch: 0, iter: 11, lr:4.500e-04, t:1.42e+00, td:1.61e-03, eta:1.19e+01, nll:5.967e+02>
<epoch: 0, iter: 12, lr:4.500e-04, t:1.39e+00, td:1.52e-03, eta:1.16e+01, nll:5.568e+02>
<epoch: 0, iter: 13, lr:4.500e-04, t:1.43e+00, td:1.57e-03, eta:1.19e+01, nll:4.397e+02>
<epoch: 0, iter: 14, lr:4.500e-04, t:1.39e+00, td:2.70e-03, eta:1.16e+01, nll:5.610e+02>
<epoch: 0, iter: 15, lr:4.500e-04, t:1.49e+00, td:1.53e-03, eta:1.24e+01, nll:1.028e+02>
<epoch: 0, iter: 16, lr:4.500e-04, t:1.54e+00, td:1.48e-03, eta:1.28e+01, nll:1.578e+01>
<epoch: 0, iter: 17, lr:4.500e-04, t:1.53e+00, td:1.65e-03, eta:1.28e+01, nll:1.097e+01>
<epoch: 0, iter: 18, lr:4.500e-04, t:1.50e+00, td:1.56e-03, eta:1.25e+01, nll:1.119e+01>
<epoch: 0, iter: 19, lr:4.500e-04, t:1.53e+00, td:2.73e-03, eta:1.28e+01, nll:7.515e+00>
<epoch: 0, iter: 20, lr:4.500e-04, t:1.49e+00, td:2.75e-03, eta:1.24e+01, nll:3.359e+00>
<epoch: 0, iter: 21, lr:4.500e-04, t:1.52e+00, td:2.58e-03, eta:1.27e+01, nll:1.562e+00>
<epoch: 0, iter: 22, lr:4.500e-04, t:1.48e+00, td:2.86e-03, eta:1.24e+01, nll:-2.830e-01>
<epoch: 0, iter: 23, lr:4.500e-04, t:1.53e+00, td:2.14e-03, eta:1.27e+01, nll:-1.822e+00>
<epoch: 0, iter: 24, lr:4.500e-04, t:1.50e+00, td:2.55e-03, eta:1.25e+01, nll:-2.389e+00>
<epoch: 6, iter: 200, lr:4.500e-04, t:1.57e+00, td:1.48e-02, eta:1.30e+01, nll:-1.198e+01>
train.py:291: RuntimeWarning: divide by zero encountered in float_scalars
cropped_sr_img_adjust = np.clip(cropped_sr_img * (mean_gray_gt / mean_gray_out), 0, 1)
train.py:291: RuntimeWarning: invalid value encountered in multiply
cropped_sr_img_adjust = np.clip(cropped_sr_img * (mean_gray_gt / mean_gray_out), 0, 1)
22-05-17 22:04:39.757 - INFO: # Validation # PSNR: nan SSIM: nan
22-05-17 22:04:39.758 - INFO: <epoch: 6, iter: 200> psnr: nan SSIM: nan
<epoch: 13, iter: 400, lr:4.500e-04, t:1.71e+00, td:1.56e-02, eta:1.41e+01, nll:-1.342e+01>
22-05-17 22:10:20.482 - INFO: # Validation # PSNR: 1.9680e+01 SSIM: nan
22-05-17 22:10:20.483 - INFO: <epoch: 13, iter: 400> psnr: 1.9680e+01 SSIM: nan
22-05-17 22:10:20.483 - INFO: Saving best models
<epoch: 19, iter: 600, lr:4.500e-04, t:1.70e+00, td:1.37e-02, eta:1.39e+01, nll:-1.219e+01>
22-05-17 22:16:00.804 - INFO: # Validation # PSNR: 1.8884e+01 SSIM: nan
22-05-17 22:16:00.805 - INFO: <epoch: 19, iter: 600> psnr: 1.8884e+01 SSIM: nan
<epoch: 26, iter: 800, lr:4.500e-04, t:1.70e+00, td:1.55e-02, eta:1.38e+01, nll:-1.317e+01>
22-05-17 22:21:41.478 - INFO: # Validation # PSNR: 2.0391e+01 SSIM: 7.2959e-01
22-05-17 22:21:41.479 - INFO: <epoch: 26, iter: 800> psnr: 2.0391e+01 SSIM: 7.2959e-01
22-05-17 22:21:41.479 - INFO: Saving best models
<epoch: 33, iter: 1,000, lr:4.500e-04, t:1.70e+00, td:1.54e-02, eta:1.37e+01, nll:-1.284e+01>
22-05-17 22:27:22.198 - INFO: # Validation # PSNR: 2.1104e+01 SSIM: 7.4706e-01
22-05-17 22:27:22.199 - INFO: <epoch: 33, iter: 1,000> psnr: 2.1104e+01 SSIM: 7.4706e-01
22-05-17 22:27:22.199 - INFO: Saving models and training states.
22-05-17 22:27:22.785 - INFO: Saving best models
<epoch: 39, iter: 1,200, lr:4.500e-04, t:1.71e+00, td:1.36e-02, eta:1.37e+01, nll:-1.376e+01>
22-05-17 22:33:03.967 - INFO: # Validation # PSNR: 1.9835e+01 SSIM: 7.1762e-01
22-05-17 22:33:03.967 - INFO: <epoch: 39, iter: 1,200> psnr: 1.9835e+01 SSIM: 7.1762e-01
<epoch: 46, iter: 1,400, lr:4.500e-04, t:1.70e+00, td:1.56e-02, eta:1.35e+01, nll:-1.423e+01>
22-05-17 22:38:44.834 - INFO: # Validation # PSNR: 1.7979e+01 SSIM: 6.7875e-01
22-05-17 22:38:44.834 - INFO: <epoch: 46, iter: 1,400> psnr: 1.7979e+01 SSIM: 6.7875e-01
<epoch: 53, iter: 1,600, lr:4.500e-04, t:1.71e+00, td:1.58e-02, eta:1.35e+01, nll:-1.479e+01>
22-05-17 22:44:26.174 - INFO: # Validation # PSNR: 1.9058e+01 SSIM: 6.8164e-01
22-05-17 22:44:26.174 - INFO: <epoch: 53, iter: 1,600> psnr: 1.9058e+01 SSIM: 6.8164e-01
<epoch: 59, iter: 1,800, lr:4.500e-04, t:1.70e+00, td:1.36e-02, eta:1.33e+01, nll:-1.263e+01>
22-05-17 22:50:06.876 - INFO: # Validation # PSNR: 2.1273e+01 SSIM: 7.5831e-01
22-05-17 22:50:06.876 - INFO: <epoch: 59, iter: 1,800> psnr: 2.1273e+01 SSIM: 7.5831e-01
22-05-17 22:50:06.876 - INFO: Saving best models
<epoch: 66, iter: 2,000, lr:4.500e-04, t:1.71e+00, td:1.55e-02, eta:1.33e+01, nll:-1.427e+01>
22-05-17 22:55:48.322 - INFO: # Validation # PSNR: 2.1426e+01 SSIM: 7.4877e-01
22-05-17 22:55:48.322 - INFO: <epoch: 66, iter: 2,000> psnr: 2.1426e+01 SSIM: 7.4877e-01
22-05-17 22:55:48.322 - INFO: Saving models and training states.
22-05-17 22:55:48.860 - INFO: Saving best models
<epoch: 73, iter: 2,200, lr:4.500e-04, t:1.71e+00, td:1.54e-02, eta:1.32e+01, nll:-1.477e+01>
22-05-17 23:01:30.147 - INFO: # Validation # PSNR: 2.1490e+01 SSIM: 7.6223e-01
22-05-17 23:01:30.148 - INFO: <epoch: 73, iter: 2,200> psnr: 2.1490e+01 SSIM: 7.6223e-01
22-05-17 23:01:30.148 - INFO: Saving best models
<epoch: 79, iter: 2,400, lr:4.500e-04, t:1.71e+00, td:1.37e-02, eta:1.31e+01, nll:-1.484e+01>
22-05-17 23:07:11.724 - INFO: # Validation # PSNR: 1.8654e+01 SSIM: 6.7717e-01
22-05-17 23:07:11.725 - INFO: <epoch: 79, iter: 2,400> psnr: 1.8654e+01 SSIM: 6.7717e-01
<epoch: 86, iter: 2,600, lr:4.500e-04, t:1.70e+00, td:1.56e-02, eta:1.30e+01, nll:-1.458e+01>
22-05-17 23:12:52.514 - INFO: # Validation # PSNR: 2.1183e+01 SSIM: 7.5730e-01
22-05-17 23:12:52.514 - INFO: <epoch: 86, iter: 2,600> psnr: 2.1183e+01 SSIM: 7.5730e-01
<epoch: 93, iter: 2,800, lr:4.500e-04, t:1.70e+00, td:1.57e-02, eta:1.29e+01, nll:-1.419e+01>
22-05-17 23:18:33.419 - INFO: # Validation # PSNR: 2.1128e+01 SSIM: 7.5815e-01
22-05-17 23:18:33.420 - INFO: <epoch: 93, iter: 2,800> psnr: 2.1128e+01 SSIM: 7.5815e-01
<epoch: 99, iter: 3,000, lr:4.500e-04, t:1.70e+00, td:1.37e-02, eta:1.28e+01, nll:-1.454e+01>
22-05-17 23:24:13.746 - INFO: # Validation # PSNR: 2.1340e+01 SSIM: 7.6262e-01
22-05-17 23:24:13.747 - INFO: <epoch: 99, iter: 3,000> psnr: 2.1340e+01 SSIM: 7.6262e-01
22-05-17 23:24:13.747 - INFO: Saving models and training states.
<epoch:106, iter: 3,200, lr:4.500e-04, t:1.71e+00, td:1.55e-02, eta:1.27e+01, nll:-1.447e+01>
22-05-17 23:29:56.327 - INFO: # Validation # PSNR: 2.2221e+01 SSIM: 7.6925e-01
22-05-17 23:29:56.327 - INFO: <epoch:106, iter: 3,200> psnr: 2.2221e+01 SSIM: 7.6925e-01
22-05-17 23:29:56.327 - INFO: Saving best models
<epoch:113, iter: 3,400, lr:4.500e-04, t:1.71e+00, td:1.58e-02, eta:1.27e+01, nll:-1.527e+01>
22-05-17 23:35:39.072 - INFO: # Validation # PSNR: 2.0904e+01 SSIM: 7.5687e-01
22-05-17 23:35:39.073 - INFO: <epoch:113, iter: 3,400> psnr: 2.0904e+01 SSIM: 7.5687e-01
<epoch:119, iter: 3,600, lr:4.500e-04, t:1.71e+00, td:1.37e-02, eta:1.25e+01, nll:-1.365e+01>
22-05-17 23:41:20.888 - INFO: # Validation # PSNR: 2.0487e+01 SSIM: 7.4803e-01
22-05-17 23:41:20.888 - INFO: <epoch:119, iter: 3,600> psnr: 2.0487e+01 SSIM: 7.4803e-01
<epoch:126, iter: 3,800, lr:4.500e-04, t:1.69e+00, td:1.56e-02, eta:1.23e+01, nll:6.914e+00>
22-05-17 23:46:58.775 - INFO: # Validation # PSNR: nan SSIM: nan
22-05-17 23:46:58.775 - INFO: <epoch:126, iter: 3,800> psnr: nan SSIM: nan
<epoch:133, iter: 4,000, lr:4.500e-04, t:1.65e+00, td:1.56e-02, eta:1.19e+01, nll:nan>
22-05-17 23:52:29.097 - INFO: # Validation # PSNR: nan SSIM: nan
22-05-17 23:52:29.098 - INFO: <epoch:133, iter: 4,000> psnr: nan SSIM: nan
22-05-17 23:52:29.098 - INFO: Saving models and training states.

Intel MKL ERROR: Parameter 4 was incorrect on entry to SLASCL.

Intel MKL ERROR: Parameter 4 was incorrect on entry to SLASCL.
Traceback (most recent call last):
File "train.py", line 343, in
main()
File "train.py", line 191, in main
nll = model.optimize_parameters(current_step)
File "/home/jaemin/Desktop/LLFlow-main/code/models/LLFlow_model.py", line 208, in optimize_parameters
self.scaler.scale(total_loss).backward()
File "/home/jaemin/anaconda3/envs/mymodel/lib/python3.7/site-packages/torch/tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/jaemin/anaconda3/envs/mymodel/lib/python3.7/site-packages/torch/autograd/init.py", line 132, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: svd_cuda: the updating process of SBDSDC did not converge (error: 22)

Do you have any solution for this problem?
Thanks.

NaN values for nll, PSNR, and SSIM on a custom dataset

Hello, I recently trained with a custom training set, and at epoch 114 the nll became abnormal.
The log is as follows:
23-12-29 14:24:01.086 - INFO: <epoch:112, iter: 11,400> psnr: 2.1525e+01 SSIM: 6.2814e-01
23-12-29 14:31:52.237 - INFO: <epoch:114, iter: 11,600> psnr: nan SSIM: nan

Referring to earlier similar issues, I retrained several times, and this anomaly always appears around epoch 120. Are there any other solutions?

Cost

Hello, I would like to know the device and image size you used when measuring inference time.

requirements.txt

Hello, where is requirements.txt? Could you provide it?

How to calculate FLOPs and #Params for LLFlow?

Hello, authors! I am using the following function to calculate the FLOPs, #Params, and inference time, and this function works for methods like ZeroDCE, RUAS, and URetinexNet.

from thop import profile
import torch
import time
def cal_eff_score(model, count = 100, use_cuda=True):

    # define input tensor
    inp_tensor = torch.rand(1, 3, 1080, 1920) 

    # deploy to cuda
    if use_cuda:
        inp_tensor = inp_tensor.cuda()
        model = model.cuda()

    # get flops and params
    flops, params = profile(model, inputs=(inp_tensor, ))
    G_flops = flops * 1e-9
    M_params = params * 1e-6

    # get time
    start_time = time.time()
    for i in range(count):
        _ = model(inp_tensor)
    used_time = time.time() - start_time
    ave_time = used_time / count

    # print score
    print('FLOPs (G) = {:.4f}'.format(G_flops))
    print('Params (M) = {:.4f}'.format(M_params))
    print('Time (S) = {:.4f}'.format(ave_time))

However, if I pass your model as the variable, it gives me the following error.

Traceback (most recent call last):
  File "test_unpaired.py", line 184, in <module>
    main()
  File "test_unpaired.py", line 135, in main
    cal_eff_score(model)
  File "test_unpaired.py", line 32, in cal_eff_score
    flops, params = profile(model, inputs=(inp_tensor, ))
  File "/root/miniconda3/lib/python3.8/site-packages/thop/profile.py", line 92, in profile
    model(*inputs)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/cuda/amp/autocast_mode.py", line 141, in decorate_autocast
    return func(*args, **kwargs)
  File "/root/autodl-tmp/Model/LLFlow/code/models/modules/LLFlow_arch.py", line 97, in forward
    return self.normal_flow(gt, lr, epses=epses, lr_enc=lr_enc, add_gt_noise=add_gt_noise, step=step,
  File "/root/autodl-tmp/Model/LLFlow/code/models/modules/LLFlow_arch.py", line 121, in normal_flow
    lr_enc = self.rrdbPreprocessing(lr)
  File "/root/autodl-tmp/Model/LLFlow/code/models/modules/LLFlow_arch.py", line 182, in rrdbPreprocessing
    rrdbResults = self.RRDB(lr, get_steps=True)
  File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/autodl-tmp/Model/LLFlow/code/models/modules/ConditionEncoder.py", line 96, in forward
    raw_low_input = x[:, 0:3].exp()
TypeError: 'NoneType' object is not subscriptable

I have spent a lot of time trying to understand the code in your models folder, but it is too complicated for me to understand. Therefore, I hope you can clarify how I can calculate the FLOPs and #Params for your model. Thanks!
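
Judging from the traceback above, thop passes the input tensor as the first positional argument of forward, which LLFlow treats as gt, leaving lr as None. A hypothetical adapter that routes the input to lr instead might look like the sketch below; the keyword arguments lr and reverse are assumptions based on the SRFlow-style interface and are not confirmed here:

import torch.nn as nn

class LLFlowProfileWrapper(nn.Module):
    # Hypothetical adapter for profiling: routes the single profiling input
    # to the `lr` argument and runs the sampling (reverse) path.
    def __init__(self, netG):
        super().__init__()
        self.netG = netG

    def forward(self, x):
        return self.netG(lr=x, reverse=True)

# Usage sketch:
# flops, params = profile(LLFlowProfileWrapper(model.netG), inputs=(inp_tensor,))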

Checkpoints

Hello, first of all, thank you for your contributions in this area; you have done excellent work.
I am kind of new to the image enhancement and deep learning areas, so I want to ask where we can find the checkpoints, and especially the model path for testing.
Looking forward to your answer!

train and test question

[Screenshot: question 1]
There are several questions I want to ask. I don't know why training goes wrong after I load the pre-trained model that you provided; is it not possible to use the pre-trained models when training? By the way, I changed the training batch_size (16 -> 8) because of a CUDA out-of-memory error.

And I trained once without the pre-trained model, got the latest checkpoint, and used it for my test, but I don't know why the loaded model was the downloaded one instead of the one trained by me. Is that normal?
[Screenshot: question 2]

Looking forward to your reply! Thank you!

ZeroDivisionError: division by zero

Hello, your work is pretty good! But I ran into some trouble when training the model on my own dataset. It looks like this:
<epoch: 1, iter: 200, lr:4.500e-04, t:1.77e+00, td:1.01e-02, eta:1.46e+01, nll:-1.825e+01>
Traceback (most recent call last):
File "train.py", line 345, in <module>
main()
File "train.py", line 301, in main
avg_psnr = avg_psnr / idx
ZeroDivisionError: float division by zero
As you can see, the training shows incorrect t, td, eta, and nll values. I guess idx and nll are not being counted correctly, but I don't know how to fix it. Have you ever encountered this problem? I really hope you can answer. Thank you!

Noise map

How can I save the noise map at every epoch?

finetune the overall brightness

Hi, in code/test.py line 148, you use mean_gt to fine-tune the overall brightness.

Is it fair to compare with other methods? MIRNet does not fine-tune the brightness.

VE-LOL metric calculation

Hello author, I would like to ask: the VE-LOL dataset is divided into Real and Synthetic parts. Did you compute the metrics separately on the two parts and then average them? I am not sure how this metric was computed. Thank you.

pre-trained models

Hello, I want to ask where to download the pre-trained models; I can't find them. Thank you!

Docker

Hi,
Thank you for the interesting solution.
The lightweight model may even be a fit for the web (tfjs).

Can you please provide a docker file to run the test on own images?

In my simple test with PyTorch 1.10.0, I had to install:

pip install natsort
pip install cv2
pip install opencv-python-headless
pip install scikit-image
pip install lpips
pip install pandas

Still I got an error:

Traceback (most recent call last):
  File "test.py", line 185, in <module>
    main()
  File "test.py", line 86, in main
    model, opt = load_model(conf_path)
  File "test.py", line 29, in load_model
    model.load_network(load_path=model_path, network=model.netG)
  File "/data/code/models/base_model.py", line 112, in load_network
    network.load_state_dict(load_net_clean, strict=strict)
  File "/opt/bitnami/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1483, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for LLFlow:
        Missing key(s) in state_dict: "RRDB.RRDB_trunk.4.RDB1.conv1.weight",...

I guess that it is a wrong PyTorch version, but I don't have time to play with it.

validation error

I trained this model on LOL; the validation PSNR and SSIM were normal at first, but then dropped suddenly. I only modified the batch size to 8. Why does this happen?
[Screenshot: Snipaste_2022-04-14_10-32-54]

About loss

Hi, I have a question: how do I use the L1 loss and the maximum-likelihood loss at the same time? When I used both together, I encountered the following error, and the resulting image was also black. I am looking forward to your reply. Thank you!
[Screenshot of the error]

Strange noise

Thanks for your work, but why is there strange noise in the processed image?
Before:
[Image: before]

After:
[Image: after]

Network question

May I ask how to train using only your invertible network directly, without the encoder?

onnx support

I wonder whether this excellent model can be converted to ONNX?

Over-compensated brightness

Hello, first of all, great work!

I was testing the model and found that some output images look very weird (see below).

I am using the LOLv2.pth model

Before
[Image: wedding1]

After
[Image: res]

=====
This image works with the LOLv2.pth model; however, it doesn't work with LOL.pth (as below).
Before
[Image: daniel]

After
[Image: res]

Any idea how to resolve this?

Thank you

nan error when i trained this mode on other field

Hi, Dr. Wang and Prof. Wan. I am a student working on AI for science. I have retrained your LLFlow model for the task of embryonic cell segmentation in Caenorhabditis elegans, applying it to enhance the boundaries and denoise the raw images captured by a confocal microscope. My images and ground truths are all grayscale images with dimensions (256, 356). Here's a demo:
(1) One of my raw images:
[Image: WechatIMG47]
(2) Paired ground truth:
[Image: WechatIMG48]
After reshaping my images to (256, 256, 3), I trained the model. However, I encountered a problem during training. I randomly encounter this error:
(3) training.log screenshot:
[Image: WechatIMG49]
"train.py:291: RuntimeWarning: divide by zero encountered in float_scalars
cropped_sr_img_adjust = np.clip(cropped_sr_img * (mean_gray_gt / mean_gray_out), 0, 1)
train.py:291: RuntimeWarning: invalid value encountered in multiply
cropped_sr_img_adjust = np.clip(cropped_sr_img * (mean_gray_gt / mean_gray_out), 0, 1)"
This error causes all results of subsequent epochs to be 'nan'. The appearance of this error is unpredictable: it sometimes occurs at the beginning of training and sometimes later on.
Although my best model can be saved and used normally, I would still like to ask for your help with this issue. Thank you.

Inference is too slow

My computing platform is an A100, and I am using your Smallnet.pth, but the network inference time is 13 s?

Questions about the paper

Hello, I'd like to ask you two questions. First: why is the input low-light image first enhanced by histogram equalization? Can the low-light image be fed into the network directly? Second: you said that normalizing flow can learn a one-to-many mapping. Where is that reflected?

Result mismatch

Hello, I cloned the code and tested it on the LOL dataset (eval15). I only modified the paths to the dataset and the pre-trained model in confs/LOL-pc.yml and ran

python test.py --opt your_config_path

I got the following results (PSNR 25.00), which are slightly different from the results in the paper (PSNR 25.19).
[Screenshot: 2022-08-22 23 58 09]
I would like to know what causes this difference.

About dataset

Hi, I found that you used the VE-LOL dataset; however, the visual results on VE-LOL in Figure 7 seem to be from the LOLv2 dataset.

If you train on LOLv1 and apply cross-dataset evaluation on LOLv2, I think that may not be very appropriate, since the training images of LOLv1 contain part of the testing images of LOLv2.

Another question: I have not found the download link for VE-LOL in the paper you cited. Can you provide an address?

About formula (8) in paper

Hi, thanks for your great work. After reading the paper, I cannot figure out the meaning of 'x' in formula (8). Also, how should I understand the PDF f_z, which does not use the variable z in formula (8)? I would appreciate it if someone could help me.

[Image: formula (8)]

About sampling

Hello, author! I have a question about your sampling process. In the paper you say: "To generate a normally exposed image using a low-light image, the low-light image is first passed through the encoder to extract the color map g(xl) and then the latent features of the encoder are used as the condition for the invertible network. For the sampling strategy of z, one can randomly select a batch of z from the distribution N (g(xL), 1) to get different outputs and then calculate the mean of generated normally-exposed images to achieve better performance. To speed up the inference, we directly select g(xl) as the input z and we empirically find that it can achieve a good enough result. So for all the experiments, we just use the mean value g(xl) as the latent feature z for the conditional normalizing flow if not specified." However, I could not find the part of the code that uses the mean g(x_l); I only see that you directly constrain the color map of the normal image against the encoder output g(x_l). Could you explain where the randomness comes in?

About the "Split" module

Thank you for your code. I observed that the "split" module is used in the flow steps of SRFlow, but not in LLFlow.
Is this because the module hurts the performance of LLFlow?

I am looking forward to your reply.

Can not test the model

Hi authors,
I tried to run the code to evaluate the model on the LOL dataset. I followed your command "python test.py --opt /media/vipl2/DATA_2/Min/LLFlow-main/code/confs/LOL_smallNet.yml". However, I cannot get the results. The terminal shows:
[Screenshot: terminal output]
And I have already changed the dataset paths inside the LOL_smallNet.yml file as:
[Screenshot: config file]
Could you please show me how to fix this and obtain the results?
Thank you!

NaN appears during training

22-06-02 16:46:36.522 - INFO: <epoch: 40, iter: 1,740> psnr: 2.2714e+01 SSIM: 7.9226e-01
22-06-02 16:47:51.099 - INFO: # Validation # PSNR: 2.2015e+01 SSIM: 7.8545e-01
22-06-02 16:47:51.099 - INFO: <epoch: 40, iter: 1,760> psnr: 2.2015e+01 SSIM: 7.8545e-01
22-06-02 16:49:08.725 - INFO: # Validation # PSNR: 2.2413e+01 SSIM: 7.8182e-01
22-06-02 16:49:08.726 - INFO: <epoch: 41, iter: 1,780> psnr: 2.2413e+01 SSIM: 7.8182e-01
<epoch: 41, iter: 1,800, lr:1.800e-04, t:3.79e+00, td:4.99e-02, eta:4.03e+01, nll:-1.410e+01>
22-06-02 16:50:23.710 - INFO: # Validation # PSNR: 2.3476e+01 SSIM: 7.7006e-01
22-06-02 16:50:23.711 - INFO: <epoch: 41, iter: 1,800> psnr: 2.3476e+01 SSIM: 7.7006e-01
22-06-02 16:51:41.445 - INFO: # Validation # PSNR: 2.3458e+01 SSIM: 7.7498e-01
22-06-02 16:51:41.446 - INFO: <epoch: 42, iter: 1,820> psnr: 2.3458e+01 SSIM: 7.7498e-01
22-06-02 16:52:56.995 - INFO: # Validation # PSNR: 2.3248e+01 SSIM: 7.9559e-01
22-06-02 16:52:56.995 - INFO: <epoch: 42, iter: 1,840> psnr: 2.3248e+01 SSIM: 7.9559e-01
22-06-02 16:54:14.513 - INFO: # Validation # PSNR: 2.1923e+01 SSIM: 7.8828e-01
22-06-02 16:54:14.513 - INFO: <epoch: 43, iter: 1,860> psnr: 2.1923e+01 SSIM: 7.8828e-01
22-06-02 16:55:29.155 - INFO: # Validation # PSNR: 2.3526e+01 SSIM: 7.2573e-01
22-06-02 16:55:29.155 - INFO: <epoch: 43, iter: 1,880> psnr: 2.3526e+01 SSIM: 7.2573e-01
<epoch: 44, iter: 1,900, lr:1.900e-04, t:3.83e+00, td:7.98e-02, eta:4.06e+01, nll:-1.437e+01>
22-06-02 16:56:46.834 - INFO: # Validation # PSNR: 2.3193e+01 SSIM: 7.6726e-01
22-06-02 16:56:46.834 - INFO: <epoch: 44, iter: 1,900> psnr: 2.3193e+01 SSIM: 7.6726e-01
22-06-02 16:58:01.565 - INFO: # Validation # PSNR: 2.1788e+01 SSIM: 7.8349e-01
22-06-02 16:58:01.566 - INFO: <epoch: 44, iter: 1,920> psnr: 2.1788e+01 SSIM: 7.8349e-01
22-06-02 16:59:18.780 - INFO: # Validation # PSNR: 2.3761e+01 SSIM: 7.9732e-01
22-06-02 16:59:18.780 - INFO: <epoch: 45, iter: 1,940> psnr: 2.3761e+01 SSIM: 7.9732e-01
22-06-02 17:00:33.713 - INFO: # Validation # PSNR: 2.3915e+01 SSIM: 7.9458e-01
22-06-02 17:00:33.713 - INFO: <epoch: 45, iter: 1,960> psnr: 2.3915e+01 SSIM: 7.9458e-01
22-06-02 17:01:51.351 - INFO: # Validation # PSNR: 2.3342e+01 SSIM: 7.6980e-01
22-06-02 17:01:51.351 - INFO: <epoch: 46, iter: 1,980> psnr: 2.3342e+01 SSIM: 7.6980e-01
<epoch: 46, iter: 2,000, lr:2.000e-04, t:3.79e+00, td:5.20e-02, eta:4.00e+01, nll:-1.445e+01>
22-06-02 17:03:06.338 - INFO: # Validation # PSNR: 2.3837e+01 SSIM: 7.9719e-01
22-06-02 17:03:06.338 - INFO: <epoch: 46, iter: 2,000> psnr: 2.3837e+01 SSIM: 7.9719e-01
22-06-02 17:04:20.938 - INFO: # Validation # PSNR: 2.4228e+01 SSIM: 7.9535e-01
22-06-02 17:04:20.938 - INFO: <epoch: 46, iter: 2,020> psnr: 2.4228e+01 SSIM: 7.9535e-01
22-06-02 17:04:20.938 - INFO: Saving best models
22-06-02 17:05:39.073 - INFO: # Validation # PSNR: 2.4125e+01 SSIM: 7.7098e-01
22-06-02 17:05:39.073 - INFO: <epoch: 47, iter: 2,040> psnr: 2.4125e+01 SSIM: 7.7098e-01
22-06-02 17:06:53.739 - INFO: # Validation # PSNR: 2.4366e+01 SSIM: 7.8451e-01
22-06-02 17:06:53.740 - INFO: <epoch: 47, iter: 2,060> psnr: 2.4366e+01 SSIM: 7.8451e-01
22-06-02 17:06:53.740 - INFO: Saving best models
22-06-02 17:08:12.200 - INFO: # Validation # PSNR: 2.3133e+01 SSIM: 8.0574e-01
22-06-02 17:08:12.200 - INFO: <epoch: 48, iter: 2,080> psnr: 2.3133e+01 SSIM: 8.0574e-01
<epoch: 48, iter: 2,100, lr:2.100e-04, t:3.81e+00, td:5.07e-02, eta:4.01e+01, nll:-1.470e+01>
22-06-02 17:09:27.285 - INFO: # Validation # PSNR: 2.3492e+01 SSIM: 7.8315e-01
22-06-02 17:09:27.285 - INFO: <epoch: 48, iter: 2,100> psnr: 2.3492e+01 SSIM: 7.8315e-01
22-06-02 17:10:45.385 - INFO: # Validation # PSNR: 2.3993e+01 SSIM: 7.7895e-01
22-06-02 17:10:45.385 - INFO: <epoch: 49, iter: 2,120> psnr: 2.3993e+01 SSIM: 7.7895e-01
22-06-02 17:12:00.666 - INFO: # Validation # PSNR: 2.3472e+01 SSIM: 7.9519e-01
22-06-02 17:12:00.666 - INFO: <epoch: 49, iter: 2,140> psnr: 2.3472e+01 SSIM: 7.9519e-01
22-06-02 17:13:17.630 - INFO: # Validation # PSNR: 2.4151e+01 SSIM: 7.9518e-01
22-06-02 17:13:17.630 - INFO: <epoch: 50, iter: 2,160> psnr: 2.4151e+01 SSIM: 7.9518e-01
22-06-02 17:14:32.791 - INFO: # Validation # PSNR: 2.3569e+01 SSIM: 7.8216e-01
22-06-02 17:14:32.791 - INFO: <epoch: 50, iter: 2,180> psnr: 2.3569e+01 SSIM: 7.8216e-01
<epoch: 51, iter: 2,200, lr:2.200e-04, t:3.83e+00, td:7.13e-02, eta:4.02e+01, nll:-1.421e+01>
22-06-02 17:15:50.171 - INFO: # Validation # PSNR: 2.3831e+01 SSIM: 8.0239e-01
22-06-02 17:15:50.172 - INFO: <epoch: 51, iter: 2,200> psnr: 2.3831e+01 SSIM: 8.0239e-01
22-06-02 17:17:05.023 - INFO: # Validation # PSNR: 2.3857e+01 SSIM: 7.8366e-01
22-06-02 17:17:05.023 - INFO: <epoch: 51, iter: 2,220> psnr: 2.3857e+01 SSIM: 7.8366e-01
22-06-02 17:18:22.693 - INFO: # Validation # PSNR: 2.4156e+01 SSIM: 7.9386e-01
22-06-02 17:18:22.694 - INFO: <epoch: 52, iter: 2,240> psnr: 2.4156e+01 SSIM: 7.9386e-01
22-06-02 17:19:37.641 - INFO: # Validation # PSNR: 2.4247e+01 SSIM: 8.1077e-01
22-06-02 17:19:37.641 - INFO: <epoch: 52, iter: 2,260> psnr: 2.4247e+01 SSIM: 8.1077e-01
22-06-02 17:20:55.194 - INFO: # Validation # PSNR: 2.4300e+01 SSIM: 7.9351e-01
22-06-02 17:20:55.194 - INFO: <epoch: 53, iter: 2,280> psnr: 2.4300e+01 SSIM: 7.9351e-01
<epoch: 53, iter: 2,300, lr:2.300e-04, t:3.80e+00, td:5.24e-02, eta:3.98e+01, nll:-1.458e+01>
22-06-02 17:22:10.256 - INFO: # Validation # PSNR: 2.3832e+01 SSIM: 7.3875e-01
22-06-02 17:22:10.256 - INFO: <epoch: 53, iter: 2,300> psnr: 2.3832e+01 SSIM: 7.3875e-01
22-06-02 17:23:25.127 - INFO: # Validation # PSNR: 2.4496e+01 SSIM: 8.0288e-01
22-06-02 17:23:25.128 - INFO: <epoch: 53, iter: 2,320> psnr: 2.4496e+01 SSIM: 8.0288e-01
22-06-02 17:23:25.128 - INFO: Saving best models
22-06-02 17:24:44.229 - INFO: # Validation # PSNR: 2.2438e+01 SSIM: 7.8931e-01
22-06-02 17:24:44.229 - INFO: <epoch: 54, iter: 2,340> psnr: 2.2438e+01 SSIM: 7.8931e-01
22-06-02 17:26:00.054 - INFO: # Validation # PSNR: 2.4096e+01 SSIM: 8.0219e-01
22-06-02 17:26:00.054 - INFO: <epoch: 54, iter: 2,360> psnr: 2.4096e+01 SSIM: 8.0219e-01
22-06-02 17:27:17.756 - INFO: # Validation # PSNR: 2.4343e+01 SSIM: 7.8690e-01
22-06-02 17:27:17.756 - INFO: <epoch: 55, iter: 2,380> psnr: 2.4343e+01 SSIM: 7.8690e-01
<epoch: 55, iter: 2,400, lr:2.400e-04, t:3.82e+00, td:5.53e-02, eta:3.99e+01, nll:-1.636e+01>
22-06-02 17:28:32.438 - INFO: # Validation # PSNR: 2.4344e+01 SSIM: 7.7306e-01
22-06-02 17:28:32.439 - INFO: <epoch: 55, iter: 2,400> psnr: 2.4344e+01 SSIM: 7.7306e-01
22-06-02 17:29:49.812 - INFO: # Validation # PSNR: 2.4630e+01 SSIM: 7.8660e-01
22-06-02 17:29:49.812 - INFO: <epoch: 56, iter: 2,420> psnr: 2.4630e+01 SSIM: 7.8660e-01
22-06-02 17:29:49.812 - INFO: Saving best models
22-06-02 17:31:05.616 - INFO: # Validation # PSNR: 2.4031e+01 SSIM: 7.9935e-01
22-06-02 17:31:05.616 - INFO: <epoch: 56, iter: 2,440> psnr: 2.4031e+01 SSIM: 7.9935e-01
22-06-02 17:32:22.724 - INFO: # Validation # PSNR: 2.4581e+01 SSIM: 8.0658e-01
22-06-02 17:32:22.724 - INFO: <epoch: 57, iter: 2,460> psnr: 2.4581e+01 SSIM: 8.0658e-01
22-06-02 17:33:37.579 - INFO: # Validation # PSNR: 2.4193e+01 SSIM: 7.9596e-01
22-06-02 17:33:37.579 - INFO: <epoch: 57, iter: 2,480> psnr: 2.4193e+01 SSIM: 7.9596e-01
<epoch: 58, iter: 2,500, lr:2.500e-04, t:3.82e+00, td:7.41e-02, eta:3.98e+01, nll:-1.464e+01>
22-06-02 17:34:55.110 - INFO: # Validation # PSNR: 2.3714e+01 SSIM: 8.0299e-01
22-06-02 17:34:55.110 - INFO: <epoch: 58, iter: 2,500> psnr: 2.3714e+01 SSIM: 8.0299e-01
22-06-02 17:36:10.035 - INFO: # Validation # PSNR: 2.4123e+01 SSIM: 7.9859e-01
22-06-02 17:36:10.035 - INFO: <epoch: 58, iter: 2,520> psnr: 2.4123e+01 SSIM: 7.9859e-01
22-06-02 17:37:27.051 - INFO: # Validation # PSNR: 2.3793e+01 SSIM: 7.8510e-01
22-06-02 17:37:27.051 - INFO: <epoch: 59, iter: 2,540> psnr: 2.3793e+01 SSIM: 7.8510e-01
22-06-02 17:38:41.986 - INFO: # Validation # PSNR: 2.3959e+01 SSIM: 7.8839e-01
22-06-02 17:38:41.986 - INFO: <epoch: 59, iter: 2,560> psnr: 2.3959e+01 SSIM: 7.8839e-01
22-06-02 17:39:56.507 - INFO: # Validation # PSNR: 2.4116e+01 SSIM: 8.1688e-01
22-06-02 17:39:56.507 - INFO: <epoch: 59, iter: 2,580> psnr: 2.4116e+01 SSIM: 8.1688e-01
<epoch: 60, iter: 2,600, lr:2.600e-04, t:3.79e+00, td:5.16e-02, eta:3.94e+01, nll:-1.540e+01>
22-06-02 17:41:13.992 - INFO: # Validation # PSNR: 2.4500e+01 SSIM: 8.1474e-01
22-06-02 17:41:13.993 - INFO: <epoch: 60, iter: 2,600> psnr: 2.4500e+01 SSIM: 8.1474e-01
22-06-02 17:42:28.953 - INFO: # Validation # PSNR: 2.3704e+01 SSIM: 7.6469e-01
22-06-02 17:42:28.953 - INFO: <epoch: 60, iter: 2,620> psnr: 2.3704e+01 SSIM: 7.6469e-01
22-06-02 17:43:44.738 - INFO: # Validation # PSNR: nan SSIM: nan
22-06-02 17:43:44.738 - INFO: <epoch: 61, iter: 2,640> psnr: nan SSIM: nan
22-06-02 17:44:57.484 - INFO: # Validation # PSNR: nan SSIM: nan
22-06-02 17:44:57.485 - INFO: <epoch: 61, iter: 2,660> psnr: nan SSIM: nan
22-06-02 17:46:12.377 - INFO: # Validation # PSNR: nan SSIM: nan
22-06-02 17:46:12.378 - INFO: <epoch: 62, iter: 2,680> psnr: nan SSIM: nan
<epoch: 62, iter: 2,700, lr:2.700e-04, t:3.71e+00, td:5.06e-02, eta:3.84e+01, nll:nan>
22-06-02 17:47:23.281 - INFO: # Validation # PSNR: nan SSIM: nan
22-06-02 17:47:23.281 - INFO: <epoch: 62, iter: 2,700> psnr: nan SSIM: nan
22-06-02 17:48:36.760 - INFO: # Validation # PSNR: nan SSIM: nan
22-06-02 17:48:36.760 - INFO: <epoch: 63, iter: 2,720> psnr: nan SSIM: nan
22-06-02 17:49:47.502 - INFO: # Validation # PSNR: nan SSIM: nan
22-06-02 17:49:47.502 - INFO: <epoch: 63, iter: 2,740> psnr: nan SSIM: nan
22-06-02 17:51:00.415 - INFO: # Validation # PSNR: nan SSIM: nan
22-06-02 17:51:00.416 - INFO: <epoch: 64, iter: 2,760> psnr: nan SSIM: nan
22-06-02 17:52:11.508 - INFO: # Validation # PSNR: nan SSIM: nan
22-06-02 17:52:11.508 - INFO: <epoch: 64, iter: 2,780> psnr: nan SSIM: nan
<epoch: 65, iter: 2,800, lr:2.800e-04, t:3.62e+00, td:7.12e-02, eta:3.74e+01, nll:nan>
22-06-02 17:53:24.952 - INFO: # Validation # PSNR: nan SSIM: nan
22-06-02 17:53:24.952 - INFO: <epoch: 65, iter: 2,800> psnr: nan SSIM: nan
22-06-02 17:54:36.266 - INFO: # Validation # PSNR: nan SSIM: nan
22-06-02 17:54:36.266 - INFO: <epoch: 65, iter: 2,820> psnr: nan SSIM: nan
22-06-02 17:55:49.798 - INFO: # Validation # PSNR: nan SSIM: nan
22-06-02 17:55:49.799 - INFO: <epoch: 66, iter: 2,840> psnr: nan SSIM: nan
22-06-02 17:57:00.782 - INFO: # Validation # PSNR: nan SSIM: nan
22-06-02 17:57:00.782 - INFO: <epoch: 66, iter: 2,860> psnr: nan SSIM: nan
22-06-02 17:58:11.699 - INFO: # Validation # PSNR: nan SSIM: nan
22-06-02 17:58:11.700 - INFO: <epoch: 66, iter: 2,880> psnr: nan SSIM: nan
<epoch: 67, iter: 2,900, lr:2.900e-04, t:3.60e+00, td:4.90e-02, eta:3.71e+01, nll:nan>

Even after setting warmup_iter to 5000, this still happens. How can I solve it? Thanks.

Are the PSNR and SSIM results accurate?

The following code runs during testing:

mean_gray_out = cv2.cvtColor(normal_img.astype(np.float32), cv2.COLOR_BGR2GRAY).mean()
mean_gray_gt = cv2.cvtColor(gt_img.astype(np.float32), cv2.COLOR_BGR2GRAY).mean()
cropped_sr_img_adjust = np.clip(cropped_sr_img * (mean_gray_gt / mean_gray_out), 0, 1)
util.save_img((cropped_sr_img_adjust * 255).astype(np.uint8), save_img_path)

If the mean_gray_gt / mean_gray_out adjustment is not applied, will the PSNR and SSIM results still be consistent with those in the paper?

PSNR/SSIM is None

I downloaded the code and modified the data path, but the PSNR and SSIM I got are None.

question about the 'finetune the overall brightness'

Hi, thank you for your code. I used the provided pretrained model to test on LOL; the average PSNR I measured is 25 dB. But I am confused: when I removed the 'finetune the overall brightness' step, the result dropped to 21.15 dB. The value declines significantly without fine-tuning the global brightness, which uses the ground truth.

Dear author, thank you for taking the time to read this message. I am very interested in your work (Low-Light Image Enhancement with Normalizing Flow) and have read it carefully, but when reading the code I am not very clear about the structure of the "Flow step". Could you provide a detailed structure diagram of the "Flow step"? Many thanks! My email address: [email protected]

Inconsistent metrics between training and testing

Hello, while running the code I found the following: the metric values logged during training with train.py deviate considerably from the metrics obtained by evaluating the saved weights with test.py. Inspecting the code, I found that the project uses two different metric implementations for training and testing, and that training seems to evaluate on RGB images while testing evaluates on grayscale images (as mentioned in the README). Does this design affect the results? Also, after switching test.py to evaluate on RGB images as well, the metrics of the two stages still show a small deviation; does this affect the results?

PSNR is difficult to reproduce

I have run the training code, but the PSNR of the test results is quite different from the paper's. The PSNR I measured on the LOL dataset is 23.93 dB, while the PSNR in the paper is 25.19 dB.
