
"MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction" (CVPRW 2022) & (Winner of NTIRE 2022 Spectral Recovery Challenge) and a toolbox for spectral reconstruction

Home Page: https://arxiv.org/abs/2204.07908

License: MIT License

Python 80.91% MATLAB 19.09%
spectral-reconstruction hyperspectral-image-reconstruction image-restoration spectral-superresolution ntire

mst-plus-plus's Introduction


MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction (CVPRW 2022)


Yuanhao Cai, Jing Lin, Zudi Lin, Haoqian Wang, Yulun Zhang, Hanspeter Pfister, Radu Timofte, Luc Van Gool

The first two authors contributed equally to this work

News

  • 2024.03.21 : Our methods Retinexformer and MST++ (NTIRE 2022 Spectral Reconstruction Challenge Winner) ranked top-2 in the NTIRE 2024 Challenge on Low Light Enhancement. Code, pre-trained models, training logs, and enhancement results will be released in the repo of Retinexformer. Stay tuned! 🚀
  • 2024.02.15 : NTIRE 2024 Challenge on Low Light Enhancement begins. Welcome to use our Retinexformer or MST++ (NTIRE 2022 Spectral Reconstruction Challenge Winner) to participate in this challenge! 🏆
  • 2023.11.02 : Our MST++ is added to the Awesome-Transformer-Attention collection. 💫
  • 2022.10.24 : We have provided functions for evaluating Params and FLOPS. Feel free to check and use them.
  • 2022.10.23 : We have provided some visualization tool functions. Please feel free to check and use them.
  • 2022.04.17 : Our paper has been accepted by CVPRW 2022, code and models have been released. 🚀
  • 2022.04.02 : We win the First place of NTIRE 2022 Challenge on Spectral Reconstruction from RGB. 🏆
[Reconstructed HSI visualizations at 480 nm, 520 nm, 580 nm, and 660 nm]

Abstract: Existing leading methods for spectral reconstruction (SR) focus on designing deeper or wider convolutional neural networks (CNNs) to learn the end-to-end mapping from the RGB image to its hyperspectral image (HSI). These CNN-based methods achieve impressive restoration performance but show limitations in capturing long-range dependencies and the self-similarity prior. To cope with this problem, we propose a novel Transformer-based method, Multi-stage Spectral-wise Transformer (MST++), for efficient spectral reconstruction. In particular, we employ Spectral-wise Multi-head Self-attention (S-MSA), which exploits the spatially sparse yet spectrally self-similar nature of HSIs, to compose the basic unit, the Spectral-wise Attention Block (SAB). The SABs then build up the Single-stage Spectral-wise Transformer (SST), which exploits a U-shaped structure to extract multi-resolution contextual information. Finally, our MST++, a cascade of several SSTs, progressively improves the reconstruction quality from coarse to fine. Comprehensive experiments show that our MST++ significantly outperforms other state-of-the-art methods. In the NTIRE 2022 Spectral Reconstruction Challenge, our approach won First place.
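
A minimal, single-head PyTorch sketch of the spectral-wise attention idea described above (an illustration only, not the repo's S-MSA module, which additionally uses multiple heads and a positional-embedding branch): the spectral channels act as the attention tokens, so the attention map is C x C instead of (HW) x (HW), and for a 31-band HSI that matrix is only 31 x 31 regardless of spatial resolution.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralWiseAttention(nn.Module):
    """Self-attention computed along the channel (spectral) dimension."""
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.temperature = nn.Parameter(torch.ones(1))  # learnable scale
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                       # x: (B, H, W, C)
        b, h, w, c = x.shape
        x = x.reshape(b, h * w, c)
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        # channels, not pixels, act as the attention tokens
        q, k, v = (t.transpose(-2, -1) for t in (q, k, v))    # (B, C, HW)
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature   # (B, C, C)
        out = attn.softmax(dim=-1) @ v                        # (B, C, HW)
        return self.proj(out.transpose(-2, -1).reshape(b, h, w, c))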


Network Architecture

Illustration of MST

Our MST++ is mainly based on our work MST, which was accepted to CVPR 2022.

Comparison with State-of-the-art Methods

This repo is a baseline and toolbox containing 11 image restoration algorithms for Spectral Reconstruction.

We are going to enlarge our model zoo in the future.

Supported algorithms (see the results table below): HSCNN+, HRNet, EDSR, AWAN, HDNet, HINet, MIRNet, Restormer, MPRNet, MST-L, and MST++.

[Comparison figure of the supported algorithms]

Results on NTIRE 2022 HSI Dataset - Validation

Method     Params (M)  FLOPS (G)  MRAE    RMSE    PSNR   Model Zoo
HSCNN+     4.65        304.45     0.3814  0.0588  26.36  Google Drive / Baidu Disk
HRNet      31.70       163.81     0.3476  0.0550  26.89  Google Drive / Baidu Disk
EDSR       2.42        158.32     0.3277  0.0437  28.29  Google Drive / Baidu Disk
AWAN       4.04        270.61     0.2500  0.0367  31.22  Google Drive / Baidu Disk
HDNet      2.66        173.81     0.2048  0.0317  32.13  Google Drive / Baidu Disk
HINet      5.21        31.04      0.2032  0.0303  32.51  Google Drive / Baidu Disk
MIRNet     3.75        42.95      0.1890  0.0274  33.29  Google Drive / Baidu Disk
Restormer  15.11       93.77      0.1833  0.0274  33.40  Google Drive / Baidu Disk
MPRNet     3.62        101.59     0.1817  0.0270  33.50  Google Drive / Baidu Disk
MST-L      2.45        32.07      0.1772  0.0256  33.90  Google Drive / Baidu Disk
MST++      1.62        23.05      0.1645  0.0248  34.32  Google Drive / Baidu Disk

Our MST++ significantly outperforms other methods while requiring fewer Params and lower FLOPS.

Note: access code for Baidu Disk is mst1.

1. Create Environment:

  • Python 3 (Anaconda is recommended)

  • NVIDIA GPU + CUDA

  • Python packages:

    cd MST-plus-plus
    pip install -r requirements.txt

2. Data Preparation:

  • Download training spectral images (Google Drive / Baidu Disk, code: mst1), training RGB images (Google Drive / Baidu Disk), validation spectral images (Google Drive / Baidu Disk), validation RGB images (Google Drive / Baidu Disk), and testing RGB images (Google Drive / Baidu Disk) from the competition website of NTIRE 2022 Spectral Reconstruction Challenge.

  • Place the training spectral images and validation spectral images in /MST-plus-plus/dataset/Train_Spec/.

  • Place the training RGB images and validation RGB images in /MST-plus-plus/dataset/Train_RGB/.

  • Place the testing RGB images in /MST-plus-plus/dataset/Test_RGB/.

  • The repo should then be organized as follows:

    |--MST-plus-plus
        |--test_challenge_code
        |--test_develop_code
        |--train_code
        |--dataset
            |--Train_Spec
                |--ARAD_1K_0001.mat
                |--ARAD_1K_0002.mat
                :
                |--ARAD_1K_0950.mat
            |--Train_RGB
                |--ARAD_1K_0001.jpg
                |--ARAD_1K_0002.jpg
                :
                |--ARAD_1K_0950.jpg
            |--Test_RGB
                |--ARAD_1K_0951.jpg
                |--ARAD_1K_0952.jpg
                :
                |--ARAD_1K_1000.jpg
            |--split_txt
                |--train_list.txt
                |--valid_list.txt
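
A quick sanity check that the files ended up where the loaders expect them (paths assumed relative to the repo root; the expected file counts follow the listing above):

import os

root = './dataset'
expected = {'Train_Spec': '.mat', 'Train_RGB': '.jpg', 'Test_RGB': '.jpg', 'split_txt': '.txt'}
for folder, ext in expected.items():
    path = os.path.join(root, folder)
    files = os.listdir(path) if os.path.isdir(path) else []
    print(folder, sum(f.endswith(ext) for f in files), ext, 'files')
# expect 950 .mat and 950 .jpg training files, 50 test RGB images,
# and the two split lists (train_list.txt, valid_list.txt)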

3. Evaluation on the Validation Set:

(1) Download the pretrained model zoo from (Google Drive / Baidu Disk, code: mst1) and place it in /MST-plus-plus/test_develop_code/model_zoo/.

(2) Run the following command to test the model on the validation RGB images.

cd /MST-plus-plus/test_develop_code/

# test MST++
python test.py --data_root ../dataset/  --method mst_plus_plus --pretrained_model_path ./model_zoo/mst_plus_plus.pth --outf ./exp/mst_plus_plus/  --gpu_id 0

# test MST-L
python test.py --data_root ../dataset/  --method mst --pretrained_model_path ./model_zoo/mst.pth --outf ./exp/mst/  --gpu_id 0

# test MIRNet
python test.py --data_root ../dataset/  --method mirnet --pretrained_model_path ./model_zoo/mirnet.pth --outf ./exp/mirnet/  --gpu_id 0

# test HINet
python test.py --data_root ../dataset/  --method hinet --pretrained_model_path ./model_zoo/hinet.pth --outf ./exp/hinet/  --gpu_id 0

# test MPRNet
python test.py --data_root ../dataset/  --method mprnet --pretrained_model_path ./model_zoo/mprnet.pth --outf ./exp/mprnet/  --gpu_id 0

# test Restormer
python test.py --data_root ../dataset/  --method restormer --pretrained_model_path ./model_zoo/restormer.pth --outf ./exp/restormer/  --gpu_id 0

# test EDSR
python test.py --data_root ../dataset/  --method edsr --pretrained_model_path ./model_zoo/edsr.pth --outf ./exp/edsr/  --gpu_id 0

# test HDNet
python test.py --data_root ../dataset/  --method hdnet --pretrained_model_path ./model_zoo/hdnet.pth --outf ./exp/hdnet/  --gpu_id 0

# test HRNet
python test.py --data_root ../dataset/  --method hrnet --pretrained_model_path ./model_zoo/hrnet.pth --outf ./exp/hrnet/  --gpu_id 0

# test HSCNN+
python test.py --data_root ../dataset/  --method hscnn_plus --pretrained_model_path ./model_zoo/hscnn_plus.pth --outf ./exp/hscnn_plus/  --gpu_id 0

# test AWAN
python test.py --data_root ../dataset/  --method awan --pretrained_model_path ./model_zoo/awan.pth --outf ./exp/awan/  --gpu_id 0

The results will be saved in /MST-plus-plus/test_develop_code/exp/ in MAT format, and the evaluation metrics (MRAE, RMSE, PSNR) will be printed.
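
For reference, the three metrics are standard; a minimal NumPy sketch of how they are commonly defined (the repo's own criterion classes may differ in details such as evaluating only a central crop, as discussed in the issues below):

import numpy as np

def mrae(pred, gt, eps=1e-8):
    # Mean Relative Absolute Error
    return np.mean(np.abs(pred - gt) / (gt + eps))

def rmse(pred, gt):
    # Root Mean Squared Error
    return np.sqrt(np.mean((pred - gt) ** 2))

def psnr(pred, gt, data_range=1.0):
    # Peak Signal-to-Noise Ratio for data scaled to [0, data_range]
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)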

  • Evaluating the Params and FLOPS of models

We have provided a function my_summary() in test_develop_code/utils.py. Please use this function to evaluate the parameters and computational complexity of the models, especially the Transformers, as follows:

from utils import my_summary
# import the model class; adjust the import path if the class lives elsewhere in the repo
from architecture.MST_Plus_Plus import MST_Plus_Plus

my_summary(MST_Plus_Plus(), 256, 256, 3, 1)
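
If you only need quick numbers, parameter and MAC counts can also be obtained with plain PyTorch plus the third-party thop package (pip install thop). This is an alternative sketch, not the repo's my_summary(), and it assumes MST_Plus_Plus has been imported as above:

import torch
from thop import profile  # third-party: pip install thop

model = MST_Plus_Plus()
n_params = sum(p.numel() for p in model.parameters())       # raw parameter count
macs, _ = profile(model, inputs=(torch.randn(1, 3, 256, 256),))
print(f'Params: {n_params / 1e6:.2f} M, MACs: {macs / 1e9:.2f} G')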

4. Evaluation on the Test Set:

(1) Download the pretrained model zoo from (Google Drive / Baidu Disk, code: mst1) and place it in /MST-plus-plus/test_challenge_code/model_zoo/.

(2) Run the following command to test the model on the testing RGB images.

cd /MST-plus-plus/test_challenge_code/

# test MST++
python test.py --data_root ../dataset/  --method mst_plus_plus --pretrained_model_path ./model_zoo/mst_plus_plus.pth --outf ./exp/mst_plus_plus/  --gpu_id 0

# test MST-L
python test.py --data_root ../dataset/  --method mst --pretrained_model_path ./model_zoo/mst.pth --outf ./exp/mst/  --gpu_id 0

# test MIRNet
python test.py --data_root ../dataset/  --method mirnet --pretrained_model_path ./model_zoo/mirnet.pth --outf ./exp/mirnet/  --gpu_id 0

# test HINet
python test.py --data_root ../dataset/  --method hinet --pretrained_model_path ./model_zoo/hinet.pth --outf ./exp/hinet/  --gpu_id 0

# test MPRNet
python test.py --data_root ../dataset/  --method mprnet --pretrained_model_path ./model_zoo/mprnet.pth --outf ./exp/mprnet/  --gpu_id 0

# test Restormer
python test.py --data_root ../dataset/  --method restormer --pretrained_model_path ./model_zoo/restormer.pth --outf ./exp/restormer/  --gpu_id 0

# test EDSR
python test.py --data_root ../dataset/  --method edsr --pretrained_model_path ./model_zoo/edsr.pth --outf ./exp/edsr/  --gpu_id 0

# test HDNet
python test.py --data_root ../dataset/  --method hdnet --pretrained_model_path ./model_zoo/hdnet.pth --outf ./exp/hdnet/  --gpu_id 0

# test HRNet
python test.py --data_root ../dataset/  --method hrnet --pretrained_model_path ./model_zoo/hrnet.pth --outf ./exp/hrnet/  --gpu_id 0

# test HSCNN+
python test.py --data_root ../dataset/  --method hscnn_plus --pretrained_model_path ./model_zoo/hscnn_plus.pth --outf ./exp/hscnn_plus/  --gpu_id 0

The results and submission.zip will be saved in /MST-plus-plus/test_challenge_code/exp/.

5. Training

To train a model, run

cd /MST-plus-plus/train_code/

# train MST++
python train.py --method mst_plus_plus  --batch_size 20 --end_epoch 300 --init_lr 4e-4 --outf ./exp/mst_plus_plus/ --data_root ../dataset/  --patch_size 128 --stride 8  --gpu_id 0

# train MST-L
python train.py --method mst  --batch_size 20 --end_epoch 300 --init_lr 4e-4 --outf ./exp/mst/ --data_root ../dataset/  --patch_size 128 --stride 8  --gpu_id 0

# train MIRNet
python train.py --method mirnet  --batch_size 20 --end_epoch 300 --init_lr 4e-4 --outf ./exp/mirnet/ --data_root ../dataset/  --patch_size 128 --stride 8  --gpu_id 0

# train HINet
python train.py --method hinet  --batch_size 20 --end_epoch 300 --init_lr 2e-4 --outf ./exp/hinet/ --data_root ../dataset/  --patch_size 128 --stride 8  --gpu_id 0

# train MPRNet
python train.py --method mprnet  --batch_size 20 --end_epoch 300 --init_lr 2e-4 --outf ./exp/mprnet/ --data_root ../dataset/  --patch_size 128 --stride 8  --gpu_id 0

# train Restormer
python train.py --method restormer  --batch_size 20 --end_epoch 300 --init_lr 2e-4 --outf ./exp/restormer/ --data_root ../dataset/  --patch_size 128 --stride 8  --gpu_id 0

# train EDSR
python train.py --method edsr  --batch_size 20 --end_epoch 300 --init_lr 1e-4 --outf ./exp/edsr/ --data_root ../dataset/  --patch_size 128 --stride 8  --gpu_id 0

# train HDNet
python train.py --method hdnet  --batch_size 20 --end_epoch 300 --init_lr 4e-4 --outf ./exp/hdnet/ --data_root ../dataset/  --patch_size 128 --stride 8  --gpu_id 0

# train HRNet
python train.py --method hrnet  --batch_size 20 --end_epoch 300 --init_lr 1e-4 --outf ./exp/hrnet/ --data_root ../dataset/  --patch_size 128 --stride 8  --gpu_id 0

# train HSCNN+
python train.py --method hscnn_plus  --batch_size 20 --end_epoch 300 --init_lr 2e-4 --outf ./exp/hscnn_plus/ --data_root ../dataset/  --patch_size 128 --stride 8  --gpu_id 0

# train AWAN
python train.py --method awan  --batch_size 20 --end_epoch 300 --init_lr 1e-4 --outf ./exp/awan/ --data_root ../dataset/  --patch_size 128 --stride 8  --gpu_id 0

The training log and models will be saved in /MST-plus-plus/train_code/exp/.
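
A side note on --patch_size and --stride: the loader crops training patches on a regular grid (see the stride-based cropping discussed in the issues below), so for the 482x512 training images the number of patches per image follows directly from these two flags. A small illustration of that arithmetic (not the repo's loader code):

H, W = 482, 512             # NTIRE 2022 training image size
patch, stride = 128, 8      # values used in the commands above
per_col = (H - patch) // stride + 1    # 45 vertical positions
per_row = (W - patch) // stride + 1    # 49 horizontal positions
print(per_col * per_row)               # 2205 patches per image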

6. Prediction

(1) Download the pretrained model zoo from (Google Drive / Baidu Disk, code: mst1) and place it in /MST-plus-plus/predict_code/model_zoo/.

(2) Run the following command to reconstruct your own RGB image.

cd /MST-plus-plus/predict_code/

# reconstruct by MST++
python test.py --rgb_path ./demo/ARAD_1K_0912.jpg  --method mst_plus_plus --pretrained_model_path ./model_zoo/mst_plus_plus.pth --outf ./exp/mst_plus_plus/  --gpu_id 0

# reconstruct by MST-L
python test.py --rgb_path ./demo/ARAD_1K_0912.jpg  --method mst --pretrained_model_path ./model_zoo/mst.pth --outf ./exp/mst/  --gpu_id 0

# reconstruct by MIRNet
python test.py --rgb_path ./demo/ARAD_1K_0912.jpg  --method mirnet --pretrained_model_path ./model_zoo/mirnet.pth --outf ./exp/mirnet/  --gpu_id 0

# reconstruct by HINet
python test.py --rgb_path ./demo/ARAD_1K_0912.jpg  --method hinet --pretrained_model_path ./model_zoo/hinet.pth --outf ./exp/hinet/  --gpu_id 0

# reconstruct by MPRNet
python test.py --rgb_path ./demo/ARAD_1K_0912.jpg  --method mprnet --pretrained_model_path ./model_zoo/mprnet.pth --outf ./exp/mprnet/  --gpu_id 0

# reconstruct by Restormer
python test.py --rgb_path ./demo/ARAD_1K_0912.jpg  --method restormer --pretrained_model_path ./model_zoo/restormer.pth --outf ./exp/restormer/  --gpu_id 0

# reconstruct by EDSR
python test.py --rgb_path ./demo/ARAD_1K_0912.jpg --method edsr --pretrained_model_path ./model_zoo/edsr.pth --outf ./exp/edsr/  --gpu_id 0

# reconstruct by HDNet
python test.py --rgb_path ./demo/ARAD_1K_0912.jpg  --method hdnet --pretrained_model_path ./model_zoo/hdnet.pth --outf ./exp/hdnet/  --gpu_id 0

# reconstruct by HRNet
python test.py --rgb_path ./demo/ARAD_1K_0912.jpg  --method hrnet --pretrained_model_path ./model_zoo/hrnet.pth --outf ./exp/hrnet/  --gpu_id 0

# reconstruct by HSCNN+
python test.py --rgb_path ./demo/ARAD_1K_0912.jpg  --method hscnn_plus --pretrained_model_path ./model_zoo/hscnn_plus.pth --outf ./exp/hscnn_plus/  --gpu_id 0

You can replace './demo/ARAD_1K_0912.jpg' with your RGB image path. The reconstructed results will be saved in /MST-plus-plus/predict_code/exp/.
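
To sanity-check a saved result, here is a minimal sketch for loading it back into NumPy (the output path and the 'cube' key are assumptions; adjust them to whatever the prediction script actually writes):

import numpy as np

path = './exp/mst_plus_plus/ARAD_1K_0912.mat'    # hypothetical output file
try:
    import h5py                                  # MATLAB v7.3 files
    with h5py.File(path, 'r') as f:
        cube = np.array(f['cube'])
except OSError:
    from scipy.io import loadmat                 # older .mat versions
    cube = loadmat(path)['cube']
print(cube.shape)                                # expect 31 spectral bands along one axis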

7. Visualization

  • Put the reconstructed HSI in visualization/simulation_results/results/.

  • Generate the RGB images of the reconstructed HSIs

cd visualization/
Run show_simulation.m

Citation

If this repo helps you, please consider citing our works:

# MST
@inproceedings{mst,
  title={Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction},
  author={Yuanhao Cai and Jing Lin and Xiaowan Hu and Haoqian Wang and Xin Yuan and Yulun Zhang and Radu Timofte and Luc Van Gool},
  booktitle={CVPR},
  year={2022}
}


# MST++
@inproceedings{mst_pp,
  title={MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction},
  author={Yuanhao Cai and Jing Lin and Zudi Lin and Haoqian Wang and Yulun Zhang and Hanspeter Pfister and Radu Timofte and Luc Van Gool},
  booktitle={CVPRW},
  year={2022}
}


# HDNet
@inproceedings{hdnet,
  title={HDNet: High-resolution Dual-domain Learning for Spectral Compressive Imaging},
  author={Xiaowan Hu and Yuanhao Cai and Jing Lin and  Haoqian Wang and Xin Yuan and Yulun Zhang and Radu Timofte and Luc Van Gool},
  booktitle={CVPR},
  year={2022}
}

mst-plus-plus's People

Contributors

caiyuanhao1998, linjing7


mst-plus-plus's Issues

How to convert .mat to .jpg

I want to convert the .mat files into .jpg files, but after the MATLAB code conversion all the pictures are black. This is my code.

[screenshots: code and output]
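
A common cause of the all-black output is that the cube values lie roughly in [0, 1] and round to zero when written directly as 8-bit images. A minimal Python sketch that scales one band before saving (the 'cube' key and the (bands, H, W) layout follow the h5py loading shown in a later issue; the file name and band index are just examples):

import h5py
import numpy as np
import cv2

with h5py.File('ARAD_1K_0001.mat', 'r') as mat:
    cube = np.array(mat['cube'], dtype=np.float32)
band = cube[15]                                    # one spectral band
band = band / (band.max() + 1e-8)                  # normalize to [0, 1]
cv2.imwrite('band_15.jpg', (band * 255).astype(np.uint8))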

Data is loaded multiple times

I ran into a basic issue during training: the data gets loaded 3 times. If I move the code above main into the main function, the data is only loaded once. Something seems to be re-invoking train.py, but I could not find where. Did you see this behavior when running the code?
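
For reference, one frequent cause of module-level code running several times is that DataLoader worker processes (or spawn-based multiprocessing in general, e.g. on Windows) re-import the entry script; wrapping the entry point in a __main__ guard prevents that. A minimal illustration, not the repo's train.py:

import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    # dataset construction and training stay inside main(), so worker
    # processes that re-import this module do not run them again
    data = TensorDataset(torch.randn(8, 3, 128, 128), torch.randn(8, 31, 128, 128))
    loader = DataLoader(data, batch_size=2, num_workers=2)
    for rgb, hsi in loader:
        pass

if __name__ == '__main__':    # only the launching process enters here
    main()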

Is the MST++ PSNR of 34.32 an ensemble result?

Hello, your work is excellent and exciting! I have three questions and hope you can answer them:
1) Is the MST++ PSNR of 34.32 the result after ensembling?
2) After training for 300 epochs, do you compute the metrics with the final .pth, or do you pick the best model according to the metrics on the test set?
3) I trained with the parameters you provided, but for some reason it ran for 395 epochs...
Training command (one RTX 3090 GPU): python train.py --method mst_plus_plus --batch_size 20 --end_epoch 300 --init_lr 4e-4 --outf ./exp/mst_plus_plus/ --patch_size 128 --stride 8 --gpu_id 0
[iter:395460/300000],lr=0.000092650,train_losses.avg=0.102350816
[iter:395480/300000],lr=0.000092685,train_losses.avg=0.102352098
[iter:395500/300000],lr=0.000092720,train_losses.avg=0.102352992
[iter:395520/300000],lr=0.000092755,train_losses.avg=0.102354914
[iter:395540/300000],lr=0.000092791,train_losses.avg=0.102356508
[iter:395560/300000],lr=0.000092826,train_losses.avg=0.102357790
Then the metrics from the final model are:
load model from ../train_code/exp/mst_plus_plus/2022_04_23_14_46_46/net_395epoch.pth
method:mst_plus_plus, mrae:0.2037583440542221, rmse:0.029374651610851288, psnr:32.78853988647461
This still seems some way off the metrics in your paper. Do you think this is normal, or is there something causing it? (Some other epochs reach metrics around 33.6.)
Hoping for your reply!

Pred in show_line.m

Hello, thanks for sharing your amazing work.
I've run show_simulation.m, and it works.
But when I tried to run the show_line.m code in the visualization folder, I got an error about 'pred' on line 6. What does pred mean?
Also, on line 24, MATLAB does not know what 'truth' is.
Is there any file or toolbox that needs to be installed?

About the number of training epochs

In the paper you write that training runs for 300 epochs, but why is it 300*1000 in the code? And why is information printed once every 20 epochs? I do not know how to write training code myself and want to learn from this, but I find it confusing; I hope you can explain.
[screenshots of the training code]

Is there a mistake in the dataset processing?

Hi, I have a small question. As shown in the figure, when reading a hyperspectral image you load the cube with shape 482x512x31, and then hyper = np.transpose(hyper, [0, 2, 1]) turns it into (482, 31, 512). Is that a mistake? I think it should be hyper = np.transpose(hyper, [2, 0, 1]), giving shape (31, 482, 512), so it can later be compared with the label. Is my understanding correct?
[screenshots of the data-loading code]

Test evaluation

Hi, in the test code the trained model is used to test the 50 RGB images, and the resulting hyperspectral images are only saved as .mat files; they are not compared against labels to compute losses or metrics. Why is that?

About the ensemble

For the final ensemble submission, whose test-set MRAE is reported as 0.1131 in the paper, what exactly were the validation-set MRAE and RMSE scores?

Training on my own dataset

[screenshot of the training log]
Hi, I changed the 31 channels to 28 channels to train on my own dataset, but train_losses.avg behaves very strangely and even becomes NaN. Have you encountered this problem?

Problems running show_line.m

Hi, when I run show_line.m with the predicted hyperspectral data, figure123 pops up and the program keeps showing as busy. When I close figure123 or double-click a region of it, an error is thrown. I have tried several things without success. How can this be fixed?

Which wavelengths do the predicted hyperspectral bands correspond to?

Hi, sorry to bother you. When I use the provided MST++ pre-trained model to predict hyperspectral data from my own RGB images, what exactly are the 31 bands of the output? Could you help clarify? Thanks a lot!

Out of GPU memory during inference

Hi, my GPU is an A10 with 24 GB of memory. Running inference with your code works with batch_size=1, but runs out of memory with batch_size=2. Loading the model does not use much memory, so why does memory grow so quickly during inference?

MST++ SAB

[screenshot of the MSAB code]

Hi, I would like to ask: is the MSAB here missing a LayerNorm?

[screenshot of the paper figure]

Unable to reconstruct RGB from HSI

Hello,

Thank you so much for sharing your work! This looks super interesting.

I'm trying to use your algorithm for a VFX implementation to do a round trip from RGB -> HSI -> RGB.
I'm using the model mst_plus_plus.pth provided from Google Drive with the code provided here: https://github.com/caiyuanhao1998/MST-plus-plus/blob/master/predict_code/test.py

I'm then converting the HSI to RGB using the CIE (2006) 10-deg CMFs (and also tried CIE 1964 from this repo), by adding up the values over all wavelengths:

import numpy as np

# spectral_array is the output from the prediction code, shape (height, width, lambda_count)
height, width, lambda_count = spectral_array.shape
lambda_values = np.linspace(400, 700, lambda_count)
image_shape = (height, width, 3)
output = np.zeros(image_shape, np.float32)
for w in range(lambda_count):
    # rgb_dict maps each wavelength to its CIE XYZ values
    rgb = rgb_dict[lambda_values[w]]
    for y in range(height):
        for x in range(width):
            output[y, x] += rgb * spectral_array[y, x, w]

I'm simply adding up the contributions from all 31 wavelengths and saving out the file after converting from CIE-XYZ (D65) to the sRGB color space.
Unfortunately the resulting RGB values are about 16 times brighter and don't really match up. I attached a reconstructed RGB image with reduced intensity (output /= 16).

The colors definitely match, so the overall distribution seems pretty good, but the intensity does not seem accounted for. Is there anything to consider to be able to reconstruct an image and preserve the intensity?

[attached: input image and reconstructed output]
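
As an aside on the snippet in this issue, the per-pixel triple loop is equivalent to a single tensor contraction, which makes it easier to experiment with different normalizations of the colour-matching weights. A hedged vectorized version, reusing spectral_array, lambda_values, and rgb_dict from the snippet above:

import numpy as np

# stack the per-wavelength XYZ weights into a (lambda_count, 3) array
weights = np.stack([rgb_dict[lam] for lam in lambda_values]).astype(np.float32)
# same sum as the nested loops: (H, W, lambda_count) x (lambda_count, 3) -> (H, W, 3)
output = np.tensordot(spectral_array, weights, axes=([2], [0]))
# if the result is uniformly too bright, normalizing by the sum of the
# Y-channel weights (a standard colorimetric scaling) is one thing to try
output /= weights[:, 1].sum()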

Loss decreases very quickly but PSNR and RMSE don't improve

Hi authors!

I appreciate the work you have done. It's quite inspiring!

I am trying to train MST++ from scratch. I followed the commands exactly, but even after training for a long time my RMSE, MRAE, and PSNR don't improve. I have even tried the recommended environment settings, and I am using the same settings (batch size, LR scheduling, etc.).

2022-08-23 10:03:27 - Iter[001000], Epoch[000001], learning rate : 0.000399989, Train Loss: 0.345733970, Test MRAE: 5.343356609, Test RMSE: 0.567134500, Test PSNR: 16.002279282
2022-08-23 10:20:02 - Iter[002000], Epoch[000002], learning rate : 0.000399956, Train Loss: 0.242199719, Test MRAE: 1.445869207, Test RMSE: 0.086180650, Test PSNR: 22.718479156
2022-08-23 10:36:37 - Iter[003000], Epoch[000003], learning rate : 0.000399902, Train Loss: 0.192822084, Test MRAE: 1.584116817, Test RMSE: 0.092025109, Test PSNR: 22.301456451
2022-08-23 10:53:12 - Iter[004000], Epoch[000004], learning rate : 0.000399825, Train Loss: 0.180765674, Test MRAE: 0.614064872, Test RMSE: 0.070524208, Test PSNR: 25.304067612
2022-08-23 11:09:47 - Iter[005000], Epoch[000005], learning rate : 0.000399727, Train Loss: 0.175371751, Test MRAE: 0.542492449, Test RMSE: 0.088396654, Test PSNR: 23.954675674
2022-08-23 11:26:22 - Iter[006000], Epoch[000006], learning rate : 0.000399606, Train Loss: 0.175960690, Test MRAE: 1.136486292, Test RMSE: 0.074130528, Test PSNR: 24.200265884
2022-08-23 11:42:57 - Iter[007000], Epoch[000007], learning rate : 0.000399464, Train Loss: 0.179268226, Test MRAE: 1.133757234, Test RMSE: 0.074117400, Test PSNR: 24.215122223
2022-08-23 11:59:33 - Iter[008000], Epoch[000008], learning rate : 0.000399301, Train Loss: 0.183103830, Test MRAE: 0.690664709, Test RMSE: 0.125816882, Test PSNR: 20.855083466
2022-08-23 12:16:08 - Iter[009000], Epoch[000009], learning rate : 0.000399115, Train Loss: 0.180019125, Test MRAE: 0.625649035, Test RMSE: 0.117592424, Test PSNR: 21.640802383
2022-08-23 12:32:44 - Iter[010000], Epoch[000010], learning rate : 0.000398907, Train Loss: 0.175567344, Test MRAE: 0.705689073, Test RMSE: 0.063903458, Test PSNR: 26.012899399
2022-08-23 12:49:19 - Iter[011000], Epoch[000011], learning rate : 0.000398678, Train Loss: 0.173624143, Test MRAE: 0.719805300, Test RMSE: 0.071538091, Test PSNR: 25.122806549
2022-08-23 13:05:55 - Iter[012000], Epoch[000012], learning rate : 0.000398427, Train Loss: 0.167626292, Test MRAE: 1.066743374, Test RMSE: 0.067167260, Test PSNR: 25.091667175
2022-08-23 13:22:30 - Iter[013000], Epoch[000013], learning rate : 0.000398154, Train Loss: 0.159311280, Test MRAE: 1.099061131, Test RMSE: 0.075757533, Test PSNR: 24.224294662
2022-08-23 13:39:06 - Iter[014000], Epoch[000014], learning rate : 0.000397860, Train Loss: 0.158835545, Test MRAE: 0.538660467, Test RMSE: 0.094124049, Test PSNR: 23.704330444
2022-08-23 13:55:43 - Iter[015000], Epoch[000015], learning rate : 0.000397544, Train Loss: 0.159253776, Test MRAE: 0.668565333, Test RMSE: 0.091043182, Test PSNR: 23.187345505
2022-08-23 15:38:40 - Iter[001000], Epoch[000001], learning rate : 0.000399989, Train Loss: 0.127697811, Test MRAE: 0.745330155, Test RMSE: 0.062896006, Test PSNR: 25.861749649
2022-08-23 15:48:46 - Iter[002000], Epoch[000002], learning rate : 0.000399956, Train Loss: 0.115349077, Test MRAE: 1.359989882, Test RMSE: 0.084344082, Test PSNR: 23.167493820
2022-08-23 16:01:59 - Iter[003000], Epoch[000003], learning rate : 0.000399902, Train Loss: 0.103145719, Test MRAE: 1.378445268, Test RMSE: 0.090944812, Test PSNR: 22.637052536
2022-08-23 16:16:06 - Iter[004000], Epoch[000004], learning rate : 0.000399825, Train Loss: 0.111022204, Test MRAE: 0.556447327, Test RMSE: 0.065509431, Test PSNR: 26.170299530
2022-08-23 16:30:18 - Iter[005000], Epoch[000005], learning rate : 0.000399727, Train Loss: 0.115231283, Test MRAE: 0.518974543, Test RMSE: 0.084985338, Test PSNR: 24.737178802
2022-08-23 16:44:30 - Iter[006000], Epoch[000006], learning rate : 0.000399606, Train Loss: 0.132773608, Test MRAE: 1.072223663, Test RMSE: 0.074294783, Test PSNR: 24.498712540
2022-08-23 16:55:09 - Iter[007000], Epoch[000007], learning rate : 0.000399464, Train Loss: 0.140846550, Test MRAE: 1.058269382, Test RMSE: 0.073174037, Test PSNR: 24.492546082

What am I doing wrong?

I would appreciate it if you could help me.

Regards!

Performance sensitive to Pytorch and CUDA environment.

Hi, thanks for such interesting work. I observed that the model performs well with PyTorch 1.8. However, when I try the same with the latest PyTorch 1.12 and CUDA 11.7, the same model starts overfitting on the training data, and the test MRAE does not go below 0.42. Similarly, with PyTorch 1.2 the training MRAE starts oscillating around 0.5 and does not change much. Does this mean the model's performance is highly sensitive to the PyTorch or CUDA version?

What format is the directly generated data?

Impressive work! If I apply your pre-trained model to reconstruct spectra from RGB, what wavelength range and how many bands does the generated hyperspectral image cover? Is it the 31-band 400 nm - 700 nm data mentioned in the paper? Are these 31 wavelengths fixed by design or random?

Question about metrics

Hi, thank you very much for releasing the code. I have a question: when I run your code on NTIRE 2022, why is MPRNet's accuracy better than MST++ and MST, and why is MST++'s accuracy slightly lower than MST?

How to get the prediction results

Thanks for your open-source code. MST++ is an amazing project for HSI reconstruction. But the repo only contains training and test code, not prediction code. I'm new to HSI reconstruction and have no idea how to obtain prediction results. Could you release the prediction code in your repository?

Quantitative metrics do not match the paper

Thanks for sharing. Using this code with the provided pre-trained models, the quantitative metrics I get from inference do not match the paper. What could be causing this?
[screenshot of the measured metrics]

Differences between MST++ and Restormer

MST++ achieves better results than Restormer with only 1/10 of the parameters. Looking at the code, the two architectures seem very similar. Which modification produces such a large improvement?

Comparing the code, the main differences I found are:

  1. Restormer uses a gated FFN; MST++ removes the gating.
  2. Compared with Restormer, MST++ removes the depth-wise convolutions that generate the Q, K, V vectors in the channel attention module (Spectral-wise Multi-head Self-Attention).
  3. Compared with Restormer, MST++ adds convolutional positional encoding to the channel attention module.
  4. MST++ removes the LayerNorm in the channel attention module.

Question about LayerNorm

[screenshot of the normalization code] Hi, according to your answer in https://github.com//issues/20, this part is equivalent to LayerNorm. But isn't this doing L2 normalization? Why is L2 normalization equivalent to LayerNorm here, and why not just use LayerNorm directly in the implementation? I hope you can help me understand, thanks.

Why not use a random crop?

First of all, thanks for sharing the code.

I found that the Dataset code samples patches uniformly on a grid, based on a fixed stride and patch size.

Random cropping is usually used for tasks that reconstruct spatial information of an RGB image. Is there a reason for using uniform patches in this code?

Building my own dataset

Hi, I want to build my own dataset, but when I extract .mat files from the raw camera data I get a lot of negative values. How do you handle negative values in the .mat files? I think the negative values come from the black/white balance correction of the data.
[screenshot of the data]

Fairness of the training comparison

Hi, after reading the paper: were the other models in these figures all retrained and tested by you on the NTIRE 2022 dataset? I also noticed in this project that the experimental settings used to train these networks are not fully identical, e.g. the learning rates differ. Is the comparison fair to the other models when the settings differ? For instance, Restormer might perform better under another setting, but it may not have been trained to its best here.
[screenshot of the comparison figure]

Why is the PNG converted from the .mat file via OpenCV single-channel rather than four-channel?

import h5py
import cv2
import numpy as np

path = "ARAD_1K_0912.mat"
with h5py.File(path, 'r') as mat:
    hyper = np.float32(np.array(mat['cube'])) * 255
cv2.imwrite('ARAD_1K_0912.png', hyper[15, :, :])

Hi, I used the code you provided to convert the predicted .mat results into PNG images, but why is the converted image single-channel? Perhaps I misunderstood, but from the introduction it looked like the converted hyperspectral image should be four-channel. Could you clarify this for me? Thank you!

The PSNR I measure does not match yours

Hello! Following your hyperspectral reconstruction method, I can run the code and reconstruct images with the trained model. However, for PSNR, the key validation metric, my numbers differ greatly from yours: I get around 18-19 while you report 34+. I therefore also tested the pre-trained MST++ model you uploaded and again got 18-19, so I do not understand where I went wrong; the environment was set up following yours. I am stuck, so I am asking for your help here. I would be very grateful for any advice in your spare time. Thanks!

How much GPU memory is needed for prediction?

Hi, thanks for sharing. I want to use your pre-trained model to test on my own RGB images, which are about 5 MB each. Is 8 GB of GPU memory enough? Also, will using the latest CUDA and PyTorch versions cause conflicts?

AWAN cube shape

The reconstructed .mat cube from the AWAN pretrained model is 246x276x31, which differs from the ground truth's 482x512x31.
Yet the MRAE, RMSE, and PSNR are still displayed correctly. How should this be handled?

Question about dataset processing

[screenshot of the data-loading code]
Hi, while reading your code I found the two lines below in the training-set loader and do not quite understand their purpose. I computed self.patch_per_img to be 2205, so why is the image index idx//self.patch_per_img? With idx ranging from 0 to 900, wouldn't img_idx then always be 0?
img_idx, patch_idx = idx//self.patch_per_img, idx%self.patch_per_img
h_idx, w_idx = patch_idx//self.patch_per_line, patch_idx%self.patch_per_line

Why are the evaluation metrics computed only on the central region of the image?

loss_mrae = criterion_mrae(output[:, :, 128:-128, 128:-128], target[:, :, 128:-128, 128:-128])
loss_rmse = criterion_rmse(output[:, :, 128:-128, 128:-128], target[:, :, 128:-128, 128:-128])
loss_psnr = criterion_psnr(output[:, :, 128:-128, 128:-128], target[:, :, 128:-128, 128:-128])
instead of
loss_mrae = criterion_mrae(output, target)
loss_rmse = criterion_rmse(output, target)
loss_psnr = criterion_psnr(output, target)

Could you share the training set?

Hi, your work is great, but I could not find the training set on the relevant webpage. Could you share the training data?

MST training

Dear authors,
python train.py --method mst --batch_size 20 --end_epoch 300 --init_lr 4e-4 --outf ./exp/mst/ --data_root ../dataset/ --patch_size 128 --stride 8 --gpu_id 0
Running with the command above gives the results below. The other models train normally; only this mst behaves like this. Did I make a mistake somewhere? Hoping for your reply!
2022-04-23 09:47:00 - Iter[001000], Epoch[000001], learning rate : 0.000399989, Train Loss: 0.646447778, Test MRAE: 0.531796575, Test RMSE: 0.091432519, Test PSNR: 19.090953827
2022-04-23 10:02:48 - Iter[002000], Epoch[000002], learning rate : 0.000399956, Train Loss: 0.578481972, Test MRAE: 0.491525143, Test RMSE: 0.086475834, Test PSNR: 19.247301102
2022-04-23 10:18:36 - Iter[003000], Epoch[000003], learning rate : 0.000399902, Train Loss: 0.547274351, Test MRAE: 0.494777530, Test RMSE: 0.079640247, Test PSNR: 19.196660995
2022-04-23 10:34:24 - Iter[004000], Epoch[000004], learning rate : 0.000399825, Train Loss: 0.526349247, Test MRAE: 0.472677380, Test RMSE: 0.069328219, Test PSNR: 19.027240753
2022-04-23 10:50:12 - Iter[005000], Epoch[000005], learning rate : 0.000399727, Train Loss: 0.511086941, Test MRAE: 0.358050972, Test RMSE: 0.059412364, Test PSNR: 19.215837479
2022-04-23 11:06:00 - Iter[006000], Epoch[000006], learning rate : 0.000399606, Train Loss: 0.499529332, Test MRAE: 0.356865495, Test RMSE: 0.055258289, Test PSNR: 19.074251175
2022-04-23 11:21:49 - Iter[007000], Epoch[000007], learning rate : 0.000399464, Train Loss: 0.489558429, Test MRAE: 0.378400117, Test RMSE: 0.057141136, Test PSNR: 19.100608826
2022-04-23 11:37:37 - Iter[008000], Epoch[000008], learning rate : 0.000399301, Train Loss: 0.481531292, Test MRAE: 0.362446398, Test RMSE: 0.056064066, Test PSNR: 19.177246094
2022-04-23 11:53:25 - Iter[009000], Epoch[000009], learning rate : 0.000399115, Train Loss: 0.474746048, Test MRAE: 0.331809163, Test RMSE: 0.051642809, Test PSNR: 19.225530624
2022-04-23 12:09:13 - Iter[010000], Epoch[000010], learning rate : 0.000398907, Train Loss: 0.468800724, Test MRAE: 0.300195664, Test RMSE: 0.046801656, Test PSNR: 19.080495834
2022-04-23 12:25:01 - Iter[011000], Epoch[000011], learning rate : 0.000398678, Train Loss: 0.463295877, Test MRAE: 0.327608109, Test RMSE: 0.050521433, Test PSNR: 19.182096481
2022-04-23 12:40:49 - Iter[012000], Epoch[000012], learning rate : 0.000398427, Train Loss: 0.458517045, Test MRAE: 0.422704875, Test RMSE: 0.066085078, Test PSNR: 19.202255249
2022-04-23 12:56:38 - Iter[013000], Epoch[000013], learning rate : 0.000398154, Train Loss: 0.453951567, Test MRAE: 0.444770008, Test RMSE: 0.066878960, Test PSNR: 18.973522186
2022-04-23 13:12:27 - Iter[014000], Epoch[000014], learning rate : 0.000397860, Train Loss: 0.447406828, Test MRAE: 0.336395442, Test RMSE: 0.050711267, Test PSNR: 19.189912796
2022-04-23 13:28:16 - Iter[015000], Epoch[000015], learning rate : 0.000397544, Train Loss: 0.440287501, Test MRAE: 0.364379525, Test RMSE: 0.054785427, Test PSNR: 19.161180496
2022-04-23 13:44:06 - Iter[016000], Epoch[000016], learning rate : 0.000397207, Train Loss: 0.432897270, Test MRAE: 0.370432645, Test RMSE: 0.056947071, Test PSNR: 19.268909454
2022-04-23 13:59:54 - Iter[017000], Epoch[000017], learning rate : 0.000396847, Train Loss: 0.425944477, Test MRAE: 0.386497170, Test RMSE: 0.057578120, Test PSNR: 19.306638718
2022-04-23 14:15:41 - Iter[018000], Epoch[000018], learning rate : 0.000396467, Train Loss: 0.419327229, Test MRAE: 0.381819218, Test RMSE: 0.054971345, Test PSNR: 19.137437820
2022-04-23 14:31:30 - Iter[019000], Epoch[000019], learning rate : 0.000396065, Train Loss: 0.412961543, Test MRAE: 0.268086284, Test RMSE: 0.039695993, Test PSNR: 19.237449646
2022-04-23 14:47:18 - Iter[020000], Epoch[000020], learning rate : 0.000395641, Train Loss: 0.406925291, Test MRAE: 0.278270304, Test RMSE: 0.040124148, Test PSNR: 19.128618240
2022-04-23 15:03:06 - Iter[021000], Epoch[000021], learning rate : 0.000395196, Train Loss: 0.401020914, Test MRAE: 0.263858318, Test RMSE: 0.039533190, Test PSNR: 19.288784027
2022-04-23 15:18:55 - Iter[022000], Epoch[000022], learning rate : 0.000394729, Train Loss: 0.395406336, Test MRAE: 0.293986678, Test RMSE: 0.042647284, Test PSNR: 19.238164902
2022-04-23 15:34:43 - Iter[023000], Epoch[000023], learning rate : 0.000394242, Train Loss: 0.390212119, Test MRAE: 0.291280866, Test RMSE: 0.042363208, Test PSNR: 19.284023285
2022-04-23 15:50:32 - Iter[024000], Epoch[000024], learning rate : 0.000393733, Train Loss: 0.385213077, Test MRAE: 0.233923718, Test RMSE: 0.036982559, Test PSNR: 19.216468811
2022-04-23 16:06:20 - Iter[025000], Epoch[000025], learning rate : 0.000393203, Train Loss: 0.380559415, Test MRAE: 0.260465741, Test RMSE: 0.039078295, Test PSNR: 19.235170364
2022-04-23 16:22:09 - Iter[026000], Epoch[000026], learning rate : 0.000392651, Train Loss: 0.375919461, Test MRAE: 0.254196465, Test RMSE: 0.036160678, Test PSNR: 19.114507675
2022-04-23 16:37:58 - Iter[027000], Epoch[000027], learning rate : 0.000392079, Train Loss: 0.371591926, Test MRAE: 0.314058006, Test RMSE: 0.044734038, Test PSNR: 19.360179901
2022-04-23 16:53:49 - Iter[028000], Epoch[000028], learning rate : 0.000391486, Train Loss: 0.367457062, Test MRAE: 0.226818457, Test RMSE: 0.031798869, Test PSNR: 18.716556549
2022-04-23 17:09:37 - Iter[029000], Epoch[000029], learning rate : 0.000390872, Train Loss: 0.363370985, Test MRAE: 0.271478027, Test RMSE: 0.041398086, Test PSNR: 19.368923187
2022-04-23 17:25:25 - Iter[030000], Epoch[000030], learning rate : 0.000390236, Train Loss: 0.359518796, Test MRAE: 0.293181419, Test RMSE: 0.040424421, Test PSNR: 19.312316895
2022-04-23 17:41:13 - Iter[031000], Epoch[000031], learning rate : 0.000389580, Train Loss: 0.355821848, Test MRAE: 0.245824501, Test RMSE: 0.034910016, Test PSNR: 19.084083557
2022-04-23 17:57:02 - Iter[032000], Epoch[000032], learning rate : 0.000388904, Train Loss: 0.352144718, Test MRAE: 0.266790211, Test RMSE: 0.040394772, Test PSNR: 19.260843277
2022-04-23 18:12:53 - Iter[033000], Epoch[000033], learning rate : 0.000388206, Train Loss: 0.348656774, Test MRAE: 0.222802311, Test RMSE: 0.034814272, Test PSNR: 19.181829453
2022-04-23 18:28:42 - Iter[034000], Epoch[000034], learning rate : 0.000387488, Train Loss: 0.345295727, Test MRAE: 0.298925221, Test RMSE: 0.043574888, Test PSNR: 19.372837067
2022-04-23 18:44:32 - Iter[035000], Epoch[000035], learning rate : 0.000386750, Train Loss: 0.342086405, Test MRAE: 0.234956443, Test RMSE: 0.036083721, Test PSNR: 19.267917633
2022-04-23 19:00:22 - Iter[036000], Epoch[000036], learning rate : 0.000385991, Train Loss: 0.338931412, Test MRAE: 0.228018716, Test RMSE: 0.032620184, Test PSNR: 19.047657013
2022-04-23 19:16:11 - Iter[037000], Epoch[000037], learning rate : 0.000385212, Train Loss: 0.335954189, Test MRAE: 0.229232669, Test RMSE: 0.034972440, Test PSNR: 19.219429016
2022-04-23 19:31:59 - Iter[038000], Epoch[000038], learning rate : 0.000384413, Train Loss: 0.333125830, Test MRAE: 0.230264142, Test RMSE: 0.034406129, Test PSNR: 19.242214203
2022-04-23 19:47:48 - Iter[039000], Epoch[000039], learning rate : 0.000383593, Train Loss: 0.330276132, Test MRAE: 0.274099082, Test RMSE: 0.043192532, Test PSNR: 19.348083496
2022-04-23 20:03:35 - Iter[040000], Epoch[000040], learning rate : 0.000382753, Train Loss: 0.327579051, Test MRAE: 0.218261853, Test RMSE: 0.031082967, Test PSNR: 18.932905197
2022-04-23 20:19:24 - Iter[041000], Epoch[000041], learning rate : 0.000381893, Train Loss: 0.324890018, Test MRAE: 0.254400879, Test RMSE: 0.038573589, Test PSNR: 19.314989090
2022-04-23 20:35:13 - Iter[042000], Epoch[000042], learning rate : 0.000381014, Train Loss: 0.322247148, Test MRAE: 0.259322703, Test RMSE: 0.038424168, Test PSNR: 19.213300705
2022-04-23 20:51:02 - Iter[043000], Epoch[000043], learning rate : 0.000380115, Train Loss: 0.319739074, Test MRAE: 0.238067225, Test RMSE: 0.035432223, Test PSNR: 19.308258057
2022-04-23 21:06:51 - Iter[044000], Epoch[000044], learning rate : 0.000379195, Train Loss: 0.317296684, Test MRAE: 0.216048419, Test RMSE: 0.031597815, Test PSNR: 19.090858459
2022-04-23 21:22:38 - Iter[045000], Epoch[000045], learning rate : 0.000378257, Train Loss: 0.314926445, Test MRAE: 0.235811070, Test RMSE: 0.034594744, Test PSNR: 19.259517670
2022-04-23 21:38:27 - Iter[046000], Epoch[000046], learning rate : 0.000377299, Train Loss: 0.312630028, Test MRAE: 0.209554464, Test RMSE: 0.030831426, Test PSNR: 19.153333664
2022-04-23 21:54:16 - Iter[047000], Epoch[000047], learning rate : 0.000376321, Train Loss: 0.310319424, Test MRAE: 0.214102373, Test RMSE: 0.031406451, Test PSNR: 19.192979813
2022-04-23 22:10:04 - Iter[048000], Epoch[000048], learning rate : 0.000375324, Train Loss: 0.308094501, Test MRAE: 0.199715927, Test RMSE: 0.030652732, Test PSNR: 19.189508438
2022-04-23 22:25:51 - Iter[049000], Epoch[000049], learning rate : 0.000374308, Train Loss: 0.305966139, Test MRAE: 0.218578279, Test RMSE: 0.031519547, Test PSNR: 19.050167084
2022-04-23 22:41:40 - Iter[050000], Epoch[000050], learning rate : 0.000373273, Train Loss: 0.303862125, Test MRAE: 0.215297714, Test RMSE: 0.032994971, Test PSNR: 19.179225922
2022-04-23 22:57:28 - Iter[051000], Epoch[000051], learning rate : 0.000372219, Train Loss: 0.301793545, Test MRAE: 0.231257230, Test RMSE: 0.032595847, Test PSNR: 19.175262451
2022-04-23 23:13:17 - Iter[052000], Epoch[000052], learning rate : 0.000371146, Train Loss: 0.299778074, Test MRAE: 0.253783792, Test RMSE: 0.037507851, Test PSNR: 19.363800049
2022-04-23 23:29:05 - Iter[053000], Epoch[000053], learning rate : 0.000370055, Train Loss: 0.297876358, Test MRAE: 0.258977175, Test RMSE: 0.040128287, Test PSNR: 19.314584732
2022-04-23 23:44:53 - Iter[054000], Epoch[000054], learning rate : 0.000368945, Train Loss: 0.295973837, Test MRAE: 0.217594683, Test RMSE: 0.032133736, Test PSNR: 19.153348923
2022-04-24 00:00:41 - Iter[055000], Epoch[000055], learning rate : 0.000367816, Train Loss: 0.294095606, Test MRAE: 0.216711819, Test RMSE: 0.031819437, Test PSNR: 19.180938721
2022-04-24 00:16:29 - Iter[056000], Epoch[000056], learning rate : 0.000366669, Train Loss: 0.292232960, Test MRAE: 0.207258597, Test RMSE: 0.029869573, Test PSNR: 19.150335312
2022-04-24 00:32:17 - Iter[057000], Epoch[000057], learning rate : 0.000365504, Train Loss: 0.290452927, Test MRAE: 0.198026657, Test RMSE: 0.028679363, Test PSNR: 19.005662918
2022-04-24 00:48:05 - Iter[058000], Epoch[000058], learning rate : 0.000364320, Train Loss: 0.288685858, Test MRAE: 0.221201345, Test RMSE: 0.033238016, Test PSNR: 19.217224121
2022-04-24 01:03:54 - Iter[059000], Epoch[000059], learning rate : 0.000363119, Train Loss: 0.287011743, Test MRAE: 0.213097245, Test RMSE: 0.032143150, Test PSNR: 19.209981918
2022-04-24 01:19:42 - Iter[060000], Epoch[000060], learning rate : 0.000361900, Train Loss: 0.285367638, Test MRAE: 0.193839341, Test RMSE: 0.027465345, Test PSNR: 18.971504211
2022-04-24 01:35:30 - Iter[061000], Epoch[000061], learning rate : 0.000360663, Train Loss: 0.283741534, Test MRAE: 0.196817130, Test RMSE: 0.029001579, Test PSNR: 19.062067032
2022-04-24 01:51:19 - Iter[062000], Epoch[000062], learning rate : 0.000359409, Train Loss: 0.282072276, Test MRAE: 0.219744995, Test RMSE: 0.029378273, Test PSNR: 18.866895676
2022-04-24 02:07:07 - Iter[063000], Epoch[000063], learning rate : 0.000358137, Train Loss: 0.280501366, Test MRAE: 0.229305848, Test RMSE: 0.033509906, Test PSNR: 19.214229584
2022-04-24 02:22:55 - Iter[064000], Epoch[000064], learning rate : 0.000356848, Train Loss: 0.278931111, Test MRAE: 0.206814021, Test RMSE: 0.030476322, Test PSNR: 19.040904999
2022-04-24 02:38:42 - Iter[065000], Epoch[000065], learning rate : 0.000355542, Train Loss: 0.277395368, Test MRAE: 0.208180115, Test RMSE: 0.031854365, Test PSNR: 19.156114578
2022-04-24 02:54:30 - Iter[066000], Epoch[000066], learning rate : 0.000354219, Train Loss: 0.275906801, Test MRAE: 0.195947483, Test RMSE: 0.030340478, Test PSNR: 19.152935028
2022-04-24 03:10:18 - Iter[067000], Epoch[000067], learning rate : 0.000352879, Train Loss: 0.274478436, Test MRAE: 0.220566273, Test RMSE: 0.032208432, Test PSNR: 18.987400055
2022-04-24 03:26:06 - Iter[068000], Epoch[000068], learning rate : 0.000351522, Train Loss: 0.273031890, Test MRAE: 0.198420197, Test RMSE: 0.029046385, Test PSNR: 18.940135956
2022-04-24 03:41:54 - Iter[069000], Epoch[000069], learning rate : 0.000350149, Train Loss: 0.271648794, Test MRAE: 0.240019321, Test RMSE: 0.034521896, Test PSNR: 19.117261887
2022-04-24 03:57:42 - Iter[070000], Epoch[000070], learning rate : 0.000348759, Train Loss: 0.270240217, Test MRAE: 0.203085661, Test RMSE: 0.028956201, Test PSNR: 19.012472153
2022-04-24 04:13:31 - Iter[071000], Epoch[000071], learning rate : 0.000347353, Train Loss: 0.268938541, Test MRAE: 0.213608027, Test RMSE: 0.031390432, Test PSNR: 18.951906204
2022-04-24 04:29:19 - Iter[072000], Epoch[000072], learning rate : 0.000345931, Train Loss: 0.267599642, Test MRAE: 0.231374159, Test RMSE: 0.031735256, Test PSNR: 19.098361969
2022-04-24 04:45:06 - Iter[073000], Epoch[000073], learning rate : 0.000344493, Train Loss: 0.266298503, Test MRAE: 0.250184625, Test RMSE: 0.035862882, Test PSNR: 19.221879959
2022-04-24 05:00:54 - Iter[074000], Epoch[000074], learning rate : 0.000343039, Train Loss: 0.264970332, Test MRAE: 0.216034293, Test RMSE: 0.032297533, Test PSNR: 19.119438171
2022-04-24 05:16:41 - Iter[075000], Epoch[000075], learning rate : 0.000341569, Train Loss: 0.263698131, Test MRAE: 0.236452579, Test RMSE: 0.034641251, Test PSNR: 19.267896652
2022-04-24 05:32:29 - Iter[076000], Epoch[000076], learning rate : 0.000340084, Train Loss: 0.262492269, Test MRAE: 0.226200759, Test RMSE: 0.032904580, Test PSNR: 19.063945770
2022-04-24 05:48:17 - Iter[077000], Epoch[000077], learning rate : 0.000338584, Train Loss: 0.261239469, Test MRAE: 0.201683655, Test RMSE: 0.029232167, Test PSNR: 18.907796860
2022-04-24 06:04:04 - Iter[078000], Epoch[000078], learning rate : 0.000337069, Train Loss: 0.260039657, Test MRAE: 0.214427084, Test RMSE: 0.031692069, Test PSNR: 18.943357468
2022-04-24 06:19:52 - Iter[079000], Epoch[000079], learning rate : 0.000335538, Train Loss: 0.258839667, Test MRAE: 0.209781334, Test RMSE: 0.030609982, Test PSNR: 19.051986694
2022-04-24 06:35:40 - Iter[080000], Epoch[000080], learning rate : 0.000333993, Train Loss: 0.257642329, Test MRAE: 0.192540377, Test RMSE: 0.029445488, Test PSNR: 19.034675598
2022-04-24 06:51:25 - Iter[081000], Epoch[000081], learning rate : 0.000332433, Train Loss: 0.256517678, Test MRAE: 0.222092792, Test RMSE: 0.030401962, Test PSNR: 18.930913925
2022-04-24 07:07:08 - Iter[082000], Epoch[000082], learning rate : 0.000330859, Train Loss: 0.255394250, Test MRAE: 0.209056050, Test RMSE: 0.029050352, Test PSNR: 19.042034149
2022-04-24 07:22:51 - Iter[083000], Epoch[000083], learning rate : 0.000329270, Train Loss: 0.254281640, Test MRAE: 0.214589760, Test RMSE: 0.030067844, Test PSNR: 18.710792542
2022-04-24 07:38:34 - Iter[084000], Epoch[000084], learning rate : 0.000327668, Train Loss: 0.253181159, Test MRAE: 0.198045820, Test RMSE: 0.028749663, Test PSNR: 19.026756287
2022-04-24 07:54:19 - Iter[085000], Epoch[000085], learning rate : 0.000326051, Train Loss: 0.252079219, Test MRAE: 0.212200597, Test RMSE: 0.030908102, Test PSNR: 19.004436493
2022-04-24 08:10:03 - Iter[086000], Epoch[000086], learning rate : 0.000324421, Train Loss: 0.251013666, Test MRAE: 0.205034807, Test RMSE: 0.029491324, Test PSNR: 19.155452728
2022-04-24 08:25:51 - Iter[087000], Epoch[000087], learning rate : 0.000322777, Train Loss: 0.249932632, Test MRAE: 0.192795992, Test RMSE: 0.028440528, Test PSNR: 19.070756912
2022-04-24 08:41:33 - Iter[088000], Epoch[000088], learning rate : 0.000321119, Train Loss: 0.248863876, Test MRAE: 0.206967190, Test RMSE: 0.028357433, Test PSNR: 18.958072662
2022-04-24 08:57:17 - Iter[089000], Epoch[000089], learning rate : 0.000319449, Train Loss: 0.247781381, Test MRAE: 0.212967843, Test RMSE: 0.029828170, Test PSNR: 18.949298859
2022-04-24 09:12:59 - Iter[090000], Epoch[000090], learning rate : 0.000317765, Train Loss: 0.246814713, Test MRAE: 0.217871219, Test RMSE: 0.031028254, Test PSNR: 19.021213531
2022-04-24 09:28:41 - Iter[091000], Epoch[000091], learning rate : 0.000316068, Train Loss: 0.245769218, Test MRAE: 0.222220868, Test RMSE: 0.031629700, Test PSNR: 19.008470535
2022-04-24 09:44:25 - Iter[092000], Epoch[000092], learning rate : 0.000314359, Train Loss: 0.244765893, Test MRAE: 0.201765835, Test RMSE: 0.030243544, Test PSNR: 19.158527374
2022-04-24 10:00:07 - Iter[093000], Epoch[000093], learning rate : 0.000312637, Train Loss: 0.243786022, Test MRAE: 0.238795847, Test RMSE: 0.035489723, Test PSNR: 19.237535477
2022-04-24 10:15:52 - Iter[094000], Epoch[000094], learning rate : 0.000310903, Train Loss: 0.242815033, Test MRAE: 0.227278069, Test RMSE: 0.032577001, Test PSNR: 19.165273666
2022-04-24 10:31:36 - Iter[095000], Epoch[000095], learning rate : 0.000309157, Train Loss: 0.241861641, Test MRAE: 0.206680715, Test RMSE: 0.029225241, Test PSNR: 19.057720184
2022-04-24 10:47:19 - Iter[096000], Epoch[000096], learning rate : 0.000307399, Train Loss: 0.240916863, Test MRAE: 0.221876547, Test RMSE: 0.031988315, Test PSNR: 19.071466446
2022-04-24 11:03:01 - Iter[097000], Epoch[000097], learning rate : 0.000305629, Train Loss: 0.239972606, Test MRAE: 0.205407277, Test RMSE: 0.029455291, Test PSNR: 18.912952423
2022-04-24 11:18:43 - Iter[098000], Epoch[000098], learning rate : 0.000303848, Train Loss: 0.239050597, Test MRAE: 0.218140364, Test RMSE: 0.031495962, Test PSNR: 19.028364182
2022-04-24 11:34:26 - Iter[099000], Epoch[000099], learning rate : 0.000302056, Train Loss: 0.145563006, Test MRAE: 0.212117374, Test RMSE: 0.029292498, Test PSNR: 18.875003815
2022-04-24 11:50:08 - Iter[100000], Epoch[000100], learning rate : 0.000300252, Train Loss: 0.145217299, Test MRAE: 0.214958370, Test RMSE: 0.030232767, Test PSNR: 18.982784271
2022-04-24 12:05:50 - Iter[101000], Epoch[000101], learning rate : 0.000298437, Train Loss: 0.146453694, Test MRAE: 0.227019936, Test RMSE: 0.032480333, Test PSNR: 19.083282471
2022-04-24 12:21:38 - Iter[102000], Epoch[000102], learning rate : 0.000296612, Train Loss: 0.146322653, Test MRAE: 0.239566207, Test RMSE: 0.035496548, Test PSNR: 19.104885101
2022-04-24 12:37:20 - Iter[103000], Epoch[000103], learning rate : 0.000294776, Train Loss: 0.146285564, Test MRAE: 0.191646069, Test RMSE: 0.028251326, Test PSNR: 19.006746292
2022-04-24 12:53:03 - Iter[104000], Epoch[000104], learning rate : 0.000292929, Train Loss: 0.146255314, Test MRAE: 0.234831825, Test RMSE: 0.034441300, Test PSNR: 19.204271317
2022-04-24 13:08:45 - Iter[105000], Epoch[000105], learning rate : 0.000291073, Train Loss: 0.145497769, Test MRAE: 0.207711518, Test RMSE: 0.030052185, Test PSNR: 19.077249527
2022-04-24 13:24:27 - Iter[106000], Epoch[000106], learning rate : 0.000289207, Train Loss: 0.145523682, Test MRAE: 0.222439647, Test RMSE: 0.032102700, Test PSNR: 19.164697647
2022-04-24 13:40:12 - Iter[107000], Epoch[000107], learning rate : 0.000287330, Train Loss: 0.145412102, Test MRAE: 0.200026944, Test RMSE: 0.027787490, Test PSNR: 18.735790253
2022-04-24 13:56:02 - Iter[108000], Epoch[000108], learning rate : 0.000285445, Train Loss: 0.145686775, Test MRAE: 0.211092830, Test RMSE: 0.031725895, Test PSNR: 19.057603836
2022-04-24 14:11:52 - Iter[109000], Epoch[000109], learning rate : 0.000283550, Train Loss: 0.145138130, Test MRAE: 0.202462837, Test RMSE: 0.029094383, Test PSNR: 19.105033875
2022-04-24 14:27:39 - Iter[110000], Epoch[000110], learning rate : 0.000281646, Train Loss: 0.144788370, Test MRAE: 0.204958782, Test RMSE: 0.028470399, Test PSNR: 19.004680634
2022-04-24 14:43:25 - Iter[111000], Epoch[000111], learning rate : 0.000279733, Train Loss: 0.144440651, Test MRAE: 0.194457725, Test RMSE: 0.028255261, Test PSNR: 18.978731155
2022-04-24 14:59:13 - Iter[112000], Epoch[000112], learning rate : 0.000277811, Train Loss: 0.144009396, Test MRAE: 0.197285131, Test RMSE: 0.028200772, Test PSNR: 18.996498108
2022-04-24 15:15:00 - Iter[113000], Epoch[000113], learning rate : 0.000275881, Train Loss: 0.143576682, Test MRAE: 0.203258514, Test RMSE: 0.029270183, Test PSNR: 18.912971497
2022-04-24 15:30:45 - Iter[114000], Epoch[000114], learning rate : 0.000273943, Train Loss: 0.143229470, Test MRAE: 0.219165146, Test RMSE: 0.031687632, Test PSNR: 19.071453094
2022-04-24 15:46:33 - Iter[115000], Epoch[000115], learning rate : 0.000271996, Train Loss: 0.142740473, Test MRAE: 0.216698647, Test RMSE: 0.031449940, Test PSNR: 19.045347214
2022-04-24 16:02:23 - Iter[116000], Epoch[000116], learning rate : 0.000270042, Train Loss: 0.142507568, Test MRAE: 0.193366423, Test RMSE: 0.028129779, Test PSNR: 19.005863190
2022-04-24 16:18:10 - Iter[117000], Epoch[000117], learning rate : 0.000268080, Train Loss: 0.142188013, Test MRAE: 0.219309017, Test RMSE: 0.030824291, Test PSNR: 19.017368317
2022-04-24 16:33:56 - Iter[118000], Epoch[000118], learning rate : 0.000266111, Train Loss: 0.141825795, Test MRAE: 0.205177620, Test RMSE: 0.028586203, Test PSNR: 18.947324753
2022-04-24 16:49:44 - Iter[119000], Epoch[000119], learning rate : 0.000264134, Train Loss: 0.141454026, Test MRAE: 0.213753939, Test RMSE: 0.030987954, Test PSNR: 19.093702316
2022-04-24 17:05:34 - Iter[120000], Epoch[000120], learning rate : 0.000262151, Train Loss: 0.141122550, Test MRAE: 0.205288440, Test RMSE: 0.029451758, Test PSNR: 19.051704407
2022-04-24 17:21:21 - Iter[121000], Epoch[000121], learning rate : 0.000260161, Train Loss: 0.141675979, Test MRAE: 0.213665485, Test RMSE: 0.029897889, Test PSNR: 18.968690872
2022-04-24 17:37:06 - Iter[122000], Epoch[000122], learning rate : 0.000258164, Train Loss: 0.141259611, Test MRAE: 0.203018293, Test RMSE: 0.029807677, Test PSNR: 19.059270859
2022-04-24 17:52:52 - Iter[123000], Epoch[000123], learning rate : 0.000256161, Train Loss: 0.140892401, Test MRAE: 0.205029026, Test RMSE: 0.029058052, Test PSNR: 19.114631653
2022-04-24 18:08:38 - Iter[124000], Epoch[000124], learning rate : 0.000254152, Train Loss: 0.140450522, Test MRAE: 0.198189601, Test RMSE: 0.028176619, Test PSNR: 18.899404526
2022-04-24 18:24:25 - Iter[125000], Epoch[000125], learning rate : 0.000252136, Train Loss: 0.140061647, Test MRAE: 0.227593854, Test RMSE: 0.032393444, Test PSNR: 19.117650986
2022-04-24 18:40:12 - Iter[126000], Epoch[000126], learning rate : 0.000250116, Train Loss: 0.139804602, Test MRAE: 0.214596003, Test RMSE: 0.030325968, Test PSNR: 19.085477829
2022-04-24 18:56:03 - Iter[127000], Epoch[000127], learning rate : 0.000248089, Train Loss: 0.139489800, Test MRAE: 0.219026044, Test RMSE: 0.030452706, Test PSNR: 18.957645416
2022-04-24 19:11:50 - Iter[128000], Epoch[000128], learning rate : 0.000246058, Train Loss: 0.139124364, Test MRAE: 0.220908672, Test RMSE: 0.031795617, Test PSNR: 19.092975616
2022-04-24 19:27:36 - Iter[129000], Epoch[000129], learning rate : 0.000244022, Train Loss: 0.138711065, Test MRAE: 0.223648518, Test RMSE: 0.030946231, Test PSNR: 19.023336411
2022-04-24 19:43:24 - Iter[130000], Epoch[000130], learning rate : 0.000241980, Train Loss: 0.138381705, Test MRAE: 0.199091196, Test RMSE: 0.028440719, Test PSNR: 18.978559494
2022-04-24 19:59:10 - Iter[131000], Epoch[000131], learning rate : 0.000239935, Train Loss: 0.138126105, Test MRAE: 0.200671941, Test RMSE: 0.028158125, Test PSNR: 18.979644775
2022-04-24 20:14:55 - Iter[132000], Epoch[000132], learning rate : 0.000237885, Train Loss: 0.137827173, Test MRAE: 0.203352720, Test RMSE: 0.028771115, Test PSNR: 18.875143051
2022-04-24 20:30:43 - Iter[133000], Epoch[000133], learning rate : 0.000235830, Train Loss: 0.137512103, Test MRAE: 0.251916736, Test RMSE: 0.035541732, Test PSNR: 19.041845322
2022-04-24 20:46:31 - Iter[134000], Epoch[000134], learning rate : 0.000233772, Train Loss: 0.137238741, Test MRAE: 0.229384869, Test RMSE: 0.032080282, Test PSNR: 18.928392410
2022-04-24 21:02:16 - Iter[135000], Epoch[000135], learning rate : 0.000231711, Train Loss: 0.136950210, Test MRAE: 0.230571747, Test RMSE: 0.033738092, Test PSNR: 19.063541412
2022-04-24 21:18:02 - Iter[136000], Epoch[000136], learning rate : 0.000229646, Train Loss: 0.136655480, Test MRAE: 0.210024893, Test RMSE: 0.028892893, Test PSNR: 18.938934326
2022-04-24 21:33:50 - Iter[137000], Epoch[000137], learning rate : 0.000227577, Train Loss: 0.136351779, Test MRAE: 0.199735105, Test RMSE: 0.027150583, Test PSNR: 18.880283356
2022-04-24 21:49:36 - Iter[138000], Epoch[000138], learning rate : 0.000225506, Train Loss: 0.136000752, Test MRAE: 0.213579893, Test RMSE: 0.030251976, Test PSNR: 19.084747314
2022-04-24 22:05:24 - Iter[139000], Epoch[000139], learning rate : 0.000223432, Train Loss: 0.135698289, Test MRAE: 0.202947915, Test RMSE: 0.027996786, Test PSNR: 19.060047150
2022-04-24 22:21:11 - Iter[140000], Epoch[000140], learning rate : 0.000221356, Train Loss: 0.135410920, Test MRAE: 0.204962358, Test RMSE: 0.028149297, Test PSNR: 19.065879822
2022-04-24 22:36:54 - Iter[141000], Epoch[000141], learning rate : 0.000219277, Train Loss: 0.135157645, Test MRAE: 0.203717798, Test RMSE: 0.029486291, Test PSNR: 19.107460022
2022-04-24 22:52:37 - Iter[142000], Epoch[000142], learning rate : 0.000217196, Train Loss: 0.134867549, Test MRAE: 0.224558994, Test RMSE: 0.031202393, Test PSNR: 19.067234039
2022-04-24 23:08:19 - Iter[143000], Epoch[000143], learning rate : 0.000215113, Train Loss: 0.134574100, Test MRAE: 0.211229146, Test RMSE: 0.031165324, Test PSNR: 19.120822906
2022-04-24 23:24:02 - Iter[144000], Epoch[000144], learning rate : 0.000213029, Train Loss: 0.134270564, Test MRAE: 0.215366766, Test RMSE: 0.030734645, Test PSNR: 19.096778870
2022-04-24 23:39:44 - Iter[145000], Epoch[000145], learning rate : 0.000210943, Train Loss: 0.134016097, Test MRAE: 0.198303372, Test RMSE: 0.027785815, Test PSNR: 19.032321930
2022-04-24 23:55:26 - Iter[146000], Epoch[000146], learning rate : 0.000208856, Train Loss: 0.133732110, Test MRAE: 0.196022749, Test RMSE: 0.029352559, Test PSNR: 19.135593414
2022-04-25 00:11:09 - Iter[147000], Epoch[000147], learning rate : 0.000206769, Train Loss: 0.133431599, Test MRAE: 0.209573299, Test RMSE: 0.029887328, Test PSNR: 19.095409393
2022-04-25 00:26:52 - Iter[148000], Epoch[000148], learning rate : 0.000204680, Train Loss: 0.133194283, Test MRAE: 0.208055094, Test RMSE: 0.028258048, Test PSNR: 18.880475998
2022-04-25 00:42:34 - Iter[149000], Epoch[000149], learning rate : 0.000202591, Train Loss: 0.132876396, Test MRAE: 0.223921135, Test RMSE: 0.030200578, Test PSNR: 18.990900040
2022-04-25 00:58:17 - Iter[150000], Epoch[000150], learning rate : 0.000200502, Train Loss: 0.132608861, Test MRAE: 0.214254171, Test RMSE: 0.028446879, Test PSNR: 18.850370407
2022-04-25 01:14:00 - Iter[151000], Epoch[000151], learning rate : 0.000198413, Train Loss: 0.132331938, Test MRAE: 0.213032275, Test RMSE: 0.028355932, Test PSNR: 18.995388031
2022-04-25 01:29:42 - Iter[152000], Epoch[000152], learning rate : 0.000196324, Train Loss: 0.132052571, Test MRAE: 0.209101513, Test RMSE: 0.029273044, Test PSNR: 19.021900177
2022-04-25 01:45:25 - Iter[153000], Epoch[000153], learning rate : 0.000194236, Train Loss: 0.131791100, Test MRAE: 0.210104674, Test RMSE: 0.029532449, Test PSNR: 19.057674408
2022-04-25 02:01:07 - Iter[154000], Epoch[000154], learning rate : 0.000192148, Train Loss: 0.131518036, Test MRAE: 0.212604702, Test RMSE: 0.028673176, Test PSNR: 18.848283768
2022-04-25 02:16:51 - Iter[155000], Epoch[000155], learning rate : 0.000190061, Train Loss: 0.131209671, Test MRAE: 0.210108310, Test RMSE: 0.030193273, Test PSNR: 18.993249893
2022-04-25 02:32:33 - Iter[156000], Epoch[000156], learning rate : 0.000187975, Train Loss: 0.130926698, Test MRAE: 0.214930877, Test RMSE: 0.029281715, Test PSNR: 18.853296280
2022-04-25 02:48:17 - Iter[157000], Epoch[000157], learning rate : 0.000185891, Train Loss: 0.130694464, Test MRAE: 0.211215034, Test RMSE: 0.029316388, Test PSNR: 18.940450668
2022-04-25 03:03:59 - Iter[158000], Epoch[000158], learning rate : 0.000183808, Train Loss: 0.130415007, Test MRAE: 0.202084705, Test RMSE: 0.028342213, Test PSNR: 19.029422760
2022-04-25 03:19:42 - Iter[159000], Epoch[000159], learning rate : 0.000181727, Train Loss: 0.130165070, Test MRAE: 0.200078711, Test RMSE: 0.028258955, Test PSNR: 19.013126373
2022-04-25 03:35:24 - Iter[160000], Epoch[000160], learning rate : 0.000179649, Train Loss: 0.129895121, Test MRAE: 0.201857880, Test RMSE: 0.027987808, Test PSNR: 19.015281677
2022-04-25 03:51:09 - Iter[161000], Epoch[000161], learning rate : 0.000177572, Train Loss: 0.129623845, Test MRAE: 0.219355986, Test RMSE: 0.030731371, Test PSNR: 19.083154678
2022-04-25 04:06:52 - Iter[162000], Epoch[000162], learning rate : 0.000175498, Train Loss: 0.129356831, Test MRAE: 0.206479296, Test RMSE: 0.028123399, Test PSNR: 19.018695831
2022-04-25 04:22:36 - Iter[163000], Epoch[000163], learning rate : 0.000173427, Train Loss: 0.129107624, Test MRAE: 0.198778838, Test RMSE: 0.028552517, Test PSNR: 19.013214111
2022-04-25 04:38:20 - Iter[164000], Epoch[000164], learning rate : 0.000171359, Train Loss: 0.128837064, Test MRAE: 0.195670083, Test RMSE: 0.027478158, Test PSNR: 18.992214203
2022-04-25 04:54:03 - Iter[165000], Epoch[000165], learning rate : 0.000169293, Train Loss: 0.128567725, Test MRAE: 0.211258903, Test RMSE: 0.029616732, Test PSNR: 19.097358704
2022-04-25 05:09:46 - Iter[166000], Epoch[000166], learning rate : 0.000167232, Train Loss: 0.128432900, Test MRAE: 0.191220835, Test RMSE: 0.027260069, Test PSNR: 18.941967010
2022-04-25 05:25:28 - Iter[167000], Epoch[000167], learning rate : 0.000165174, Train Loss: 0.128152385, Test MRAE: 0.202767581, Test RMSE: 0.028542725, Test PSNR: 18.998825073
2022-04-25 05:41:12 - Iter[168000], Epoch[000168], learning rate : 0.000163119, Train Loss: 0.127891243, Test MRAE: 0.204690695, Test RMSE: 0.028364403, Test PSNR: 18.970281601
2022-04-25 05:56:54 - Iter[169000], Epoch[000169], learning rate : 0.000161069, Train Loss: 0.127617165, Test MRAE: 0.203078985, Test RMSE: 0.028361408, Test PSNR: 19.034769058
2022-04-25 06:12:38 - Iter[170000], Epoch[000170], learning rate : 0.000159024, Train Loss: 0.127377793, Test MRAE: 0.206805512, Test RMSE: 0.030018816, Test PSNR: 19.061866760
2022-04-25 06:28:23 - Iter[171000], Epoch[000171], learning rate : 0.000156982, Train Loss: 0.127140224, Test MRAE: 0.208625391, Test RMSE: 0.028507199, Test PSNR: 18.989234924
2022-04-25 06:44:06 - Iter[172000], Epoch[000172], learning rate : 0.000154946, Train Loss: 0.126904130, Test MRAE: 0.210705116, Test RMSE: 0.029785754, Test PSNR: 18.994005203
2022-04-25 06:59:49 - Iter[173000], Epoch[000173], learning rate : 0.000152915, Train Loss: 0.126656905, Test MRAE: 0.210085228, Test RMSE: 0.029623386, Test PSNR: 19.080978394
2022-04-25 07:15:31 - Iter[174000], Epoch[000174], learning rate : 0.000150888, Train Loss: 0.126484647, Test MRAE: 0.215656236, Test RMSE: 0.030040557, Test PSNR: 19.056533813
2022-04-25 07:31:13 - Iter[175000], Epoch[000175], learning rate : 0.000148868, Train Loss: 0.126255050, Test MRAE: 0.216682300, Test RMSE: 0.029841734, Test PSNR: 18.989347458
2022-04-25 07:46:56 - Iter[176000], Epoch[000176], learning rate : 0.000146853, Train Loss: 0.126007512, Test MRAE: 0.229140550, Test RMSE: 0.031297620, Test PSNR: 18.960144043
2022-04-25 08:02:38 - Iter[177000], Epoch[000177], learning rate : 0.000144843, Train Loss: 0.125766575, Test MRAE: 0.218668729, Test RMSE: 0.031003775, Test PSNR: 19.038333893
2022-04-25 08:18:22 - Iter[178000], Epoch[000178], learning rate : 0.000142840, Train Loss: 0.125532001, Test MRAE: 0.199688122, Test RMSE: 0.028291211, Test PSNR: 18.974950790
2022-04-25 08:34:09 - Iter[179000], Epoch[000179], learning rate : 0.000140843, Train Loss: 0.125298381, Test MRAE: 0.207475007, Test RMSE: 0.029013749, Test PSNR: 18.985069275
2022-04-25 08:49:51 - Iter[180000], Epoch[000180], learning rate : 0.000138853, Train Loss: 0.125062704, Test MRAE: 0.206389904, Test RMSE: 0.029368754, Test PSNR: 19.005840302
2022-04-25 09:05:36 - Iter[181000], Epoch[000181], learning rate : 0.000136870, Train Loss: 0.124840498, Test MRAE: 0.210619673, Test RMSE: 0.029867401, Test PSNR: 18.961513519
2022-04-25 09:21:18 - Iter[182000], Epoch[000182], learning rate : 0.000134893, Train Loss: 0.124605544, Test MRAE: 0.220269203, Test RMSE: 0.032067895, Test PSNR: 19.168319702
2022-04-25 09:37:01 - Iter[183000], Epoch[000183], learning rate : 0.000132924, Train Loss: 0.124368042, Test MRAE: 0.191453904, Test RMSE: 0.027029432, Test PSNR: 19.038944244
2022-04-25 09:52:47 - Iter[184000], Epoch[000184], learning rate : 0.000130962, Train Loss: 0.124137081, Test MRAE: 0.206539020, Test RMSE: 0.029571155, Test PSNR: 19.048000336
2022-04-25 10:08:30 - Iter[185000], Epoch[000185], learning rate : 0.000129008, Train Loss: 0.123918340, Test MRAE: 0.194905445, Test RMSE: 0.027700884, Test PSNR: 19.005758286
2022-04-25 10:24:14 - Iter[186000], Epoch[000186], learning rate : 0.000127061, Train Loss: 0.123697750, Test MRAE: 0.205566064, Test RMSE: 0.028691338, Test PSNR: 18.983287811
2022-04-25 10:39:56 - Iter[187000], Epoch[000187], learning rate : 0.000125123, Train Loss: 0.123466983, Test MRAE: 0.201362506, Test RMSE: 0.028842332, Test PSNR: 19.055122375
2022-04-25 10:55:39 - Iter[188000], Epoch[000188], learning rate : 0.000123193, Train Loss: 0.123243488, Test MRAE: 0.206426546, Test RMSE: 0.029135758, Test PSNR: 19.006008148
2022-04-25 11:11:23 - Iter[189000], Epoch[000189], learning rate : 0.000121271, Train Loss: 0.123076245, Test MRAE: 0.207691565, Test RMSE: 0.029229913, Test PSNR: 19.043151855
2022-04-25 11:27:06 - Iter[190000], Epoch[000190], learning rate : 0.000119358, Train Loss: 0.122856230, Test MRAE: 0.206229493, Test RMSE: 0.028870497, Test PSNR: 18.966106415
2022-04-25 11:42:49 - Iter[191000], Epoch[000191], learning rate : 0.000117454, Train Loss: 0.122638315, Test MRAE: 0.205771387, Test RMSE: 0.028803570, Test PSNR: 18.946226120
2022-04-25 11:58:32 - Iter[192000], Epoch[000192], learning rate : 0.000115559, Train Loss: 0.122420080, Test MRAE: 0.212724239, Test RMSE: 0.030358646, Test PSNR: 19.106163025
2022-04-25 12:14:15 - Iter[193000], Epoch[000193], learning rate : 0.000113673, Train Loss: 0.122199051, Test MRAE: 0.214906707, Test RMSE: 0.030220853, Test PSNR: 19.064342499
2022-04-25 12:30:00 - Iter[194000], Epoch[000194], learning rate : 0.000111797, Train Loss: 0.121984355, Test MRAE: 0.198100254, Test RMSE: 0.028148143, Test PSNR: 18.971063614
2022-04-25 12:45:43 - Iter[195000], Epoch[000195], learning rate : 0.000109931, Train Loss: 0.121781416, Test MRAE: 0.198713332, Test RMSE: 0.028723657, Test PSNR: 19.015140533
2022-04-25 13:01:27 - Iter[196000], Epoch[000196], learning rate : 0.000108074, Train Loss: 0.121572427, Test MRAE: 0.197447866, Test RMSE: 0.028784798, Test PSNR: 18.987123489
2022-04-25 13:17:11 - Iter[197000], Epoch[000197], learning rate : 0.000106228, Train Loss: 0.121370003, Test MRAE: 0.205944434, Test RMSE: 0.029543681, Test PSNR: 19.049442291
2022-04-25 13:32:58 - Iter[198000], Epoch[000198], learning rate : 0.000104392, Train Loss: 0.101264954, Test MRAE: 0.202921927, Test RMSE: 0.029191626, Test PSNR: 19.073200226
2022-04-25 13:48:45 - Iter[199000], Epoch[000199], learning rate : 0.000102567, Train Loss: 0.099872775, Test MRAE: 0.213733122, Test RMSE: 0.030334827, Test PSNR: 19.099483490
2022-04-25 14:04:30 - Iter[200000], Epoch[000200], learning rate : 0.000100752, Train Loss: 0.099436894, Test MRAE: 0.207790688, Test RMSE: 0.029580811, Test PSNR: 19.065809250
2022-04-25 14:20:13 - Iter[201000], Epoch[000201], learning rate : 0.000098948, Train Loss: 0.099523008, Test MRAE: 0.206653327, Test RMSE: 0.029507691, Test PSNR: 19.007652283
2022-04-25 14:35:57 - Iter[202000], Epoch[000202], learning rate : 0.000097155, Train Loss: 0.099680349, Test MRAE: 0.209775746, Test RMSE: 0.029743711, Test PSNR: 19.063774109
2022-04-25 14:51:41 - Iter[203000], Epoch[000203], learning rate : 0.000095374, Train Loss: 0.099537373, Test MRAE: 0.205857038, Test RMSE: 0.029820710, Test PSNR: 19.054101944
2022-04-25 15:07:24 - Iter[204000], Epoch[000204], learning rate : 0.000093604, Train Loss: 0.099403314, Test MRAE: 0.201450929, Test RMSE: 0.028907372, Test PSNR: 18.933977127
2022-04-25 15:23:07 - Iter[205000], Epoch[000205], learning rate : 0.000091846, Train Loss: 0.099265657, Test MRAE: 0.202834055, Test RMSE: 0.029253015, Test PSNR: 18.995012283
2022-04-25 15:38:50 - Iter[206000], Epoch[000206], learning rate : 0.000090100, Train Loss: 0.099154606, Test MRAE: 0.200972140, Test RMSE: 0.028723257, Test PSNR: 18.992853165
2022-04-25 15:54:34 - Iter[207000], Epoch[000207], learning rate : 0.000088366, Train Loss: 0.099067487, Test MRAE: 0.206179917, Test RMSE: 0.029294010, Test PSNR: 18.984216690
2022-04-25 16:10:17 - Iter[208000], Epoch[000208], learning rate : 0.000086644, Train Loss: 0.098957688, Test MRAE: 0.201780394, Test RMSE: 0.028410414, Test PSNR: 19.005832672
2022-04-25 16:26:03 - Iter[209000], Epoch[000209], learning rate : 0.000084935, Train Loss: 0.098789133, Test MRAE: 0.209743783, Test RMSE: 0.029414069, Test PSNR: 18.970657349
2022-04-25 16:41:47 - Iter[210000], Epoch[000210], learning rate : 0.000083239, Train Loss: 0.098655112, Test MRAE: 0.200564787, Test RMSE: 0.028275695, Test PSNR: 19.033458710
2022-04-25 16:57:32 - Iter[211000], Epoch[000211], learning rate : 0.000081555, Train Loss: 0.098488487, Test MRAE: 0.199889153, Test RMSE: 0.028281061, Test PSNR: 18.950765610
2022-04-25 17:13:15 - Iter[212000], Epoch[000212], learning rate : 0.000079884, Train Loss: 0.098348401, Test MRAE: 0.200752750, Test RMSE: 0.027866315, Test PSNR: 18.983562469
2022-04-25 17:28:57 - Iter[213000], Epoch[000213], learning rate : 0.000078227, Train Loss: 0.098229684, Test MRAE: 0.201839328, Test RMSE: 0.028493920, Test PSNR: 18.957841873
2022-04-25 17:44:40 - Iter[214000], Epoch[000214], learning rate : 0.000076583, Train Loss: 0.098102383, Test MRAE: 0.204463542, Test RMSE: 0.028648939, Test PSNR: 18.996351242
2022-04-25 18:00:22 - Iter[215000], Epoch[000215], learning rate : 0.000074952, Train Loss: 0.097989276, Test MRAE: 0.201856405, Test RMSE: 0.028056156, Test PSNR: 18.929220200
2022-04-25 18:16:08 - Iter[216000], Epoch[000216], learning rate : 0.000073336, Train Loss: 0.097863868, Test MRAE: 0.210833490, Test RMSE: 0.030079007, Test PSNR: 19.039060593
2022-04-25 18:31:50 - Iter[217000], Epoch[000217], learning rate : 0.000071733, Train Loss: 0.097764373, Test MRAE: 0.199148744, Test RMSE: 0.027379682, Test PSNR: 18.915548325
2022-04-25 18:47:33 - Iter[218000], Epoch[000218], learning rate : 0.000070144, Train Loss: 0.097662948, Test MRAE: 0.211309999, Test RMSE: 0.029880499, Test PSNR: 19.041957855
2022-04-25 19:03:16 - Iter[219000], Epoch[000219], learning rate : 0.000068570, Train Loss: 0.097545773, Test MRAE: 0.202322707, Test RMSE: 0.029350601, Test PSNR: 19.032449722
2022-04-25 19:19:05 - Iter[220000], Epoch[000220], learning rate : 0.000067010, Train Loss: 0.097439609, Test MRAE: 0.201889187, Test RMSE: 0.028135814, Test PSNR: 19.002304077
2022-04-25 19:34:58 - Iter[221000], Epoch[000221], learning rate : 0.000065465, Train Loss: 0.097333550, Test MRAE: 0.205386743, Test RMSE: 0.029404126, Test PSNR: 19.006492615
2022-04-25 19:50:42 - Iter[222000], Epoch[000222], learning rate : 0.000063934, Train Loss: 0.097243309, Test MRAE: 0.200861678, Test RMSE: 0.028495271, Test PSNR: 19.029602051
2022-04-25 20:06:24 - Iter[223000], Epoch[000223], learning rate : 0.000062419, Train Loss: 0.097154871, Test MRAE: 0.209499523, Test RMSE: 0.029613551, Test PSNR: 19.025751114
2022-04-25 20:22:07 - Iter[224000], Epoch[000224], learning rate : 0.000060919, Train Loss: 0.097066604, Test MRAE: 0.216345757, Test RMSE: 0.030452432, Test PSNR: 19.079719543
2022-04-25 20:37:50 - Iter[225000], Epoch[000225], learning rate : 0.000059434, Train Loss: 0.096955277, Test MRAE: 0.207744464, Test RMSE: 0.029243071, Test PSNR: 19.021963120
2022-04-25 20:53:34 - Iter[226000], Epoch[000226], learning rate : 0.000057964, Train Loss: 0.096838258, Test MRAE: 0.199603081, Test RMSE: 0.028276462, Test PSNR: 19.053121567
2022-04-25 21:09:17 - Iter[227000], Epoch[000227], learning rate : 0.000056510, Train Loss: 0.096710235, Test MRAE: 0.197099164, Test RMSE: 0.027980030, Test PSNR: 18.989082336
2022-04-25 21:24:59 - Iter[228000], Epoch[000228], learning rate : 0.000055072, Train Loss: 0.096593060, Test MRAE: 0.192215458, Test RMSE: 0.027695557, Test PSNR: 19.051717758
2022-04-25 21:40:41 - Iter[229000], Epoch[000229], learning rate : 0.000053650, Train Loss: 0.096480407, Test MRAE: 0.194151208, Test RMSE: 0.028371762, Test PSNR: 19.065599442
2022-04-25 21:56:29 - Iter[230000], Epoch[000230], learning rate : 0.000052244, Train Loss: 0.096383542, Test MRAE: 0.203011096, Test RMSE: 0.029121868, Test PSNR: 19.086605072
2022-04-25 22:12:12 - Iter[231000], Epoch[000231], learning rate : 0.000050854, Train Loss: 0.096299224, Test MRAE: 0.201670945, Test RMSE: 0.028482547, Test PSNR: 19.051557541
2022-04-25 22:27:55 - Iter[232000], Epoch[000232], learning rate : 0.000049481, Train Loss: 0.096198440, Test MRAE: 0.196830481, Test RMSE: 0.028049719, Test PSNR: 19.039011002
2022-04-25 22:43:37 - Iter[233000], Epoch[000233], learning rate : 0.000048124, Train Loss: 0.096105792, Test MRAE: 0.194558725, Test RMSE: 0.027864750, Test PSNR: 19.071357727
2022-04-25 22:59:20 - Iter[234000], Epoch[000234], learning rate : 0.000046784, Train Loss: 0.096012853, Test MRAE: 0.200894669, Test RMSE: 0.028464397, Test PSNR: 19.031705856
2022-04-25 23:15:02 - Iter[235000], Epoch[000235], learning rate : 0.000045461, Train Loss: 0.095911391, Test MRAE: 0.203084469, Test RMSE: 0.029015776, Test PSNR: 19.042705536
2022-04-25 23:30:45 - Iter[236000], Epoch[000236], learning rate : 0.000044154, Train Loss: 0.095807374, Test MRAE: 0.205492422, Test RMSE: 0.029720480, Test PSNR: 19.064033508
2022-04-25 23:46:27 - Iter[237000], Epoch[000237], learning rate : 0.000042865, Train Loss: 0.095718473, Test MRAE: 0.200766295, Test RMSE: 0.029130232, Test PSNR: 19.070289612
2022-04-26 00:02:16 - Iter[238000], Epoch[000238], learning rate : 0.000041594, Train Loss: 0.095622800, Test MRAE: 0.201171368, Test RMSE: 0.028642515, Test PSNR: 19.076307297
2022-04-26 00:17:58 - Iter[239000], Epoch[000239], learning rate : 0.000040339, Train Loss: 0.095527329, Test MRAE: 0.207034603, Test RMSE: 0.030098038, Test PSNR: 19.122308731
2022-04-26 00:33:41 - Iter[240000], Epoch[000240], learning rate : 0.000039102, Train Loss: 0.095431149, Test MRAE: 0.200998485, Test RMSE: 0.028509894, Test PSNR: 19.048767090
2022-04-26 00:49:25 - Iter[241000], Epoch[000241], learning rate : 0.000037883, Train Loss: 0.095341481, Test MRAE: 0.197351202, Test RMSE: 0.027994553, Test PSNR: 19.053705215
2022-04-26 01:05:07 - Iter[242000], Epoch[000242], learning rate : 0.000036682, Train Loss: 0.095249861, Test MRAE: 0.205203146, Test RMSE: 0.029421324, Test PSNR: 19.068685532
2022-04-26 01:20:51 - Iter[243000], Epoch[000243], learning rate : 0.000035499, Train Loss: 0.095152855, Test MRAE: 0.202429041, Test RMSE: 0.028778778, Test PSNR: 19.062547684
2022-04-26 01:36:39 - Iter[244000], Epoch[000244], learning rate : 0.000034333, Train Loss: 0.095076188, Test MRAE: 0.199525461, Test RMSE: 0.028577324, Test PSNR: 19.065444946
2022-04-26 01:52:28 - Iter[245000], Epoch[000245], learning rate : 0.000033186, Train Loss: 0.094988756, Test MRAE: 0.193906844, Test RMSE: 0.027360469, Test PSNR: 19.035274506
2022-04-26 02:08:13 - Iter[246000], Epoch[000246], learning rate : 0.000032058, Train Loss: 0.094907209, Test MRAE: 0.194509611, Test RMSE: 0.027639085, Test PSNR: 19.038093567
2022-04-26 02:23:58 - Iter[247000], Epoch[000247], learning rate : 0.000030948, Train Loss: 0.094821684, Test MRAE: 0.205400094, Test RMSE: 0.029638980, Test PSNR: 19.069122314
2022-04-26 02:39:41 - Iter[248000], Epoch[000248], learning rate : 0.000029856, Train Loss: 0.094757117, Test MRAE: 0.200400651, Test RMSE: 0.028777106, Test PSNR: 19.065080643
2022-04-26 02:55:26 - Iter[249000], Epoch[000249], learning rate : 0.000028783, Train Loss: 0.094666235, Test MRAE: 0.196040452, Test RMSE: 0.027733754, Test PSNR: 19.004329681
2022-04-26 03:11:09 - Iter[250000], Epoch[000250], learning rate : 0.000027729, Train Loss: 0.094580248, Test MRAE: 0.198402479, Test RMSE: 0.028335137, Test PSNR: 19.012487411
2022-04-26 03:26:51 - Iter[251000], Epoch[000251], learning rate : 0.000026694, Train Loss: 0.094503880, Test MRAE: 0.198768482, Test RMSE: 0.027874328, Test PSNR: 18.994089127
2022-04-26 03:42:33 - Iter[252000], Epoch[000252], learning rate : 0.000025678, Train Loss: 0.094424129, Test MRAE: 0.204637840, Test RMSE: 0.028599529, Test PSNR: 19.048522949
2022-04-26 03:58:17 - Iter[253000], Epoch[000253], learning rate : 0.000024681, Train Loss: 0.094356239, Test MRAE: 0.197843701, Test RMSE: 0.027720425, Test PSNR: 19.019216537
2022-04-26 04:13:59 - Iter[254000], Epoch[000254], learning rate : 0.000023703, Train Loss: 0.094279803, Test MRAE: 0.195381641, Test RMSE: 0.027696660, Test PSNR: 19.030433655
2022-04-26 04:29:44 - Iter[255000], Epoch[000255], learning rate : 0.000022745, Train Loss: 0.094200850, Test MRAE: 0.201875895, Test RMSE: 0.028370189, Test PSNR: 19.016584396
2022-04-26 04:45:26 - Iter[256000], Epoch[000256], learning rate : 0.000021806, Train Loss: 0.094126128, Test MRAE: 0.203219756, Test RMSE: 0.028974859, Test PSNR: 19.087203979
2022-04-26 05:01:09 - Iter[257000], Epoch[000257], learning rate : 0.000020887, Train Loss: 0.094045103, Test MRAE: 0.194824770, Test RMSE: 0.027946815, Test PSNR: 19.061742783
2022-04-26 05:16:51 - Iter[258000], Epoch[000258], learning rate : 0.000019988, Train Loss: 0.093975663, Test MRAE: 0.198491260, Test RMSE: 0.027982576, Test PSNR: 19.010288239
2022-04-26 05:32:35 - Iter[259000], Epoch[000259], learning rate : 0.000019108, Train Loss: 0.093899436, Test MRAE: 0.201042473, Test RMSE: 0.028101049, Test PSNR: 19.056558609
2022-04-26 05:48:19 - Iter[260000], Epoch[000260], learning rate : 0.000018249, Train Loss: 0.093830153, Test MRAE: 0.201697826, Test RMSE: 0.028358519, Test PSNR: 19.034065247
2022-04-26 06:04:01 - Iter[261000], Epoch[000261], learning rate : 0.000017409, Train Loss: 0.093763351, Test MRAE: 0.200538099, Test RMSE: 0.028340435, Test PSNR: 19.020803452
2022-04-26 06:19:44 - Iter[262000], Epoch[000262], learning rate : 0.000016589, Train Loss: 0.093695477, Test MRAE: 0.202214047, Test RMSE: 0.028496150, Test PSNR: 19.040538788
2022-04-26 06:35:27 - Iter[263000], Epoch[000263], learning rate : 0.000015790, Train Loss: 0.093628220, Test MRAE: 0.199471787, Test RMSE: 0.028235294, Test PSNR: 19.043592453
2022-04-26 06:51:13 - Iter[264000], Epoch[000264], learning rate : 0.000015010, Train Loss: 0.093556948, Test MRAE: 0.200963169, Test RMSE: 0.028326141, Test PSNR: 19.030656815
2022-04-26 07:07:01 - Iter[265000], Epoch[000265], learning rate : 0.000014251, Train Loss: 0.093489110, Test MRAE: 0.197174221, Test RMSE: 0.027766764, Test PSNR: 19.029552460
2022-04-26 07:22:46 - Iter[266000], Epoch[000266], learning rate : 0.000013513, Train Loss: 0.093422472, Test MRAE: 0.198560372, Test RMSE: 0.027845098, Test PSNR: 19.010746002
2022-04-26 07:38:30 - Iter[267000], Epoch[000267], learning rate : 0.000012795, Train Loss: 0.093358673, Test MRAE: 0.198623791, Test RMSE: 0.028192090, Test PSNR: 19.046485901
2022-04-26 07:54:13 - Iter[268000], Epoch[000268], learning rate : 0.000012098, Train Loss: 0.093292192, Test MRAE: 0.197530746, Test RMSE: 0.028094612, Test PSNR: 19.029310226
2022-04-26 08:09:59 - Iter[269000], Epoch[000269], learning rate : 0.000011421, Train Loss: 0.093230203, Test MRAE: 0.199864402, Test RMSE: 0.028338477, Test PSNR: 19.030046463
2022-04-26 08:25:42 - Iter[270000], Epoch[000270], learning rate : 0.000010765, Train Loss: 0.093164317, Test MRAE: 0.196141958, Test RMSE: 0.027876040, Test PSNR: 19.026119232
2022-04-26 08:41:25 - Iter[271000], Epoch[000271], learning rate : 0.000010130, Train Loss: 0.093100064, Test MRAE: 0.199385583, Test RMSE: 0.027962262, Test PSNR: 19.018707275
2022-04-26 08:57:08 - Iter[272000], Epoch[000272], learning rate : 0.000009515, Train Loss: 0.093038529, Test MRAE: 0.203236878, Test RMSE: 0.028776500, Test PSNR: 19.067478180
2022-04-26 09:12:51 - Iter[273000], Epoch[000273], learning rate : 0.000008922, Train Loss: 0.092979573, Test MRAE: 0.201909021, Test RMSE: 0.028371701, Test PSNR: 19.025129318
2022-04-26 09:28:33 - Iter[274000], Epoch[000274], learning rate : 0.000008350, Train Loss: 0.092915766, Test MRAE: 0.199700192, Test RMSE: 0.028185047, Test PSNR: 19.029628754
2022-04-26 09:44:18 - Iter[275000], Epoch[000275], learning rate : 0.000007798, Train Loss: 0.092863925, Test MRAE: 0.199149221, Test RMSE: 0.028089032, Test PSNR: 19.003961563
2022-04-26 10:00:00 - Iter[276000], Epoch[000276], learning rate : 0.000007268, Train Loss: 0.092805415, Test MRAE: 0.196530923, Test RMSE: 0.027886262, Test PSNR: 19.012132645
2022-04-26 10:15:43 - Iter[277000], Epoch[000277], learning rate : 0.000006759, Train Loss: 0.092753656, Test MRAE: 0.197557405, Test RMSE: 0.028128177, Test PSNR: 19.038337708
2022-04-26 10:31:26 - Iter[278000], Epoch[000278], learning rate : 0.000006271, Train Loss: 0.092694871, Test MRAE: 0.197338894, Test RMSE: 0.027873993, Test PSNR: 19.016971588
2022-04-26 10:47:11 - Iter[279000], Epoch[000279], learning rate : 0.000005805, Train Loss: 0.092640802, Test MRAE: 0.199877754, Test RMSE: 0.028381795, Test PSNR: 19.056432724
2022-04-26 11:02:55 - Iter[280000], Epoch[000280], learning rate : 0.000005360, Train Loss: 0.092587359, Test MRAE: 0.198997468, Test RMSE: 0.028246677, Test PSNR: 19.035556793
2022-04-26 11:18:37 - Iter[281000], Epoch[000281], learning rate : 0.000004936, Train Loss: 0.092534050, Test MRAE: 0.198044509, Test RMSE: 0.027995434, Test PSNR: 19.035224915
2022-04-26 11:34:25 - Iter[282000], Epoch[000282], learning rate : 0.000004534, Train Loss: 0.092481837, Test MRAE: 0.199215770, Test RMSE: 0.028157072, Test PSNR: 19.028303146
2022-04-26 11:50:09 - Iter[283000], Epoch[000283], learning rate : 0.000004153, Train Loss: 0.092428215, Test MRAE: 0.199768141, Test RMSE: 0.028234014, Test PSNR: 19.032674789
2022-04-26 12:05:52 - Iter[284000], Epoch[000284], learning rate : 0.000003794, Train Loss: 0.092372514, Test MRAE: 0.200132683, Test RMSE: 0.028386112, Test PSNR: 19.050033569
2022-04-26 12:21:34 - Iter[285000], Epoch[000285], learning rate : 0.000003457, Train Loss: 0.092325777, Test MRAE: 0.200290814, Test RMSE: 0.028357379, Test PSNR: 19.042633057
2022-04-26 12:37:16 - Iter[286000], Epoch[000286], learning rate : 0.000003140, Train Loss: 0.092276767, Test MRAE: 0.199622378, Test RMSE: 0.028264590, Test PSNR: 19.027994156
2022-04-26 12:52:59 - Iter[287000], Epoch[000287], learning rate : 0.000002846, Train Loss: 0.092229761, Test MRAE: 0.200462580, Test RMSE: 0.028426396, Test PSNR: 19.040649414
2022-04-26 13:08:45 - Iter[288000], Epoch[000288], learning rate : 0.000002573, Train Loss: 0.092182025, Test MRAE: 0.199467480, Test RMSE: 0.028248770, Test PSNR: 19.024812698
2022-04-26 13:24:28 - Iter[289000], Epoch[000289], learning rate : 0.000002322, Train Loss: 0.092134498, Test MRAE: 0.199127629, Test RMSE: 0.028083034, Test PSNR: 19.012405396
2022-04-26 13:40:10 - Iter[290000], Epoch[000290], learning rate : 0.000002093, Train Loss: 0.092089295, Test MRAE: 0.200336218, Test RMSE: 0.028368756, Test PSNR: 19.028881073
2022-04-26 13:55:54 - Iter[291000], Epoch[000291], learning rate : 0.000001886, Train Loss: 0.092047319, Test MRAE: 0.198826462, Test RMSE: 0.028130181, Test PSNR: 19.023952484
2022-04-26 14:11:39 - Iter[292000], Epoch[000292], learning rate : 0.000001700, Train Loss: 0.092004254, Test MRAE: 0.200323239, Test RMSE: 0.028369704, Test PSNR: 19.032167435
2022-04-26 14:27:22 - Iter[293000], Epoch[000293], learning rate : 0.000001536, Train Loss: 0.091962963, Test MRAE: 0.198782578, Test RMSE: 0.028092362, Test PSNR: 19.022401810
2022-04-26 14:43:07 - Iter[294000], Epoch[000294], learning rate : 0.000001394, Train Loss: 0.091920927, Test MRAE: 0.199555337, Test RMSE: 0.028241893, Test PSNR: 19.032341003
2022-04-26 14:58:52 - Iter[295000], Epoch[000295], learning rate : 0.000001274, Train Loss: 0.091873795, Test MRAE: 0.200663373, Test RMSE: 0.028400686, Test PSNR: 19.039638519
2022-04-26 15:14:36 - Iter[296000], Epoch[000296], learning rate : 0.000001175, Train Loss: 0.091835335, Test MRAE: 0.199872047, Test RMSE: 0.028326735, Test PSNR: 19.036495209
2022-04-26 15:30:21 - Iter[297000], Epoch[000297], learning rate : 0.000001099, Train Loss: 0.087091446, Test MRAE: 0.200016499, Test RMSE: 0.028326962, Test PSNR: 19.033159256
2022-04-26 15:46:09 - Iter[298000], Epoch[000298], learning rate : 0.000001044, Train Loss: 0.087477118, Test MRAE: 0.199742630, Test RMSE: 0.028276479, Test PSNR: 19.029752731
2022-04-26 16:01:52 - Iter[299000], Epoch[000299], learning rate : 0.000001011, Train Loss: 0.087522693, Test MRAE: 0.200146362, Test RMSE: 0.028322442, Test PSNR: 19.031415939
2022-04-26 16:17:34 - Iter[300000], Epoch[000300], learning rate : 0.000001000, Train Loss: 0.087497599, Test MRAE: 0.200221285, Test RMSE: 0.028348781, Test PSNR: 19.032201767
2022-04-26 16:33:17 - Iter[301000], Epoch[000301], learning rate : 0.000001011, Train Loss: 0.087575421, Test MRAE: 0.200616375, Test RMSE: 0.028419724, Test PSNR: 19.031145096
2022-04-26 16:49:01 - Iter[302000], Epoch[000302], learning rate : 0.000001044, Train Loss: 0.087664612, Test MRAE: 0.199603871, Test RMSE: 0.028222578, Test PSNR: 19.027841568
2022-04-26 17:04:44 - Iter[303000], Epoch[000303], learning rate : 0.000001098, Train Loss: 0.087709896, Test MRAE: 0.199289769, Test RMSE: 0.028206011, Test PSNR: 19.033077240
2022-04-26 17:20:27 - Iter[304000], Epoch[000304], learning rate : 0.000001175, Train Loss: 0.087701336, Test MRAE: 0.198439360, Test RMSE: 0.028052971, Test PSNR: 19.023553848
2022-04-26 17:36:09 - Iter[305000], Epoch[000305], learning rate : 0.000001273, Train Loss: 0.087724887, Test MRAE: 0.199188337, Test RMSE: 0.028147060, Test PSNR: 19.024749756
2022-04-26 17:51:52 - Iter[306000], Epoch[000306], learning rate : 0.000001394, Train Loss: 0.087697797, Test MRAE: 0.198744774, Test RMSE: 0.028123770, Test PSNR: 19.021911621
2022-04-26 18:07:40 - Iter[307000], Epoch[000307], learning rate : 0.000001536, Train Loss: 0.087724082, Test MRAE: 0.198784173, Test RMSE: 0.028199598, Test PSNR: 19.027591705
2022-04-26 18:23:26 - Iter[308000], Epoch[000308], learning rate : 0.000001699, Train Loss: 0.087754786, Test MRAE: 0.200074822, Test RMSE: 0.028291941, Test PSNR: 19.026384354
2022-04-26 18:39:09 - Iter[309000], Epoch[000309], learning rate : 0.000001885, Train Loss: 0.087743573, Test MRAE: 0.199570522, Test RMSE: 0.028250307, Test PSNR: 19.019496918
2022-04-26 18:54:53 - Iter[310000], Epoch[000310], learning rate : 0.000002093, Train Loss: 0.087756194, Test MRAE: 0.200466946, Test RMSE: 0.028466217, Test PSNR: 19.043897629
2022-04-26 19:10:36 - Iter[311000], Epoch[000311], learning rate : 0.000002322, Train Loss: 0.087725841, Test MRAE: 0.200274870, Test RMSE: 0.028353559, Test PSNR: 19.034431458
2022-04-26 19:26:18 - Iter[312000], Epoch[000312], learning rate : 0.000002573, Train Loss: 0.087727122, Test MRAE: 0.199417949, Test RMSE: 0.028240953, Test PSNR: 19.032062531
2022-04-26 19:42:00 - Iter[313000], Epoch[000313], learning rate : 0.000002846, Train Loss: 0.087730475, Test MRAE: 0.200596809, Test RMSE: 0.028354665, Test PSNR: 19.033941269
2022-04-26 19:57:43 - Iter[314000], Epoch[000314], learning rate : 0.000003140, Train Loss: 0.087749563, Test MRAE: 0.201247305, Test RMSE: 0.028494956, Test PSNR: 19.038160324
2022-04-26 20:13:26 - Iter[315000], Epoch[000315], learning rate : 0.000003456, Train Loss: 0.087754004, Test MRAE: 0.200625256, Test RMSE: 0.028311061, Test PSNR: 19.022411346
2022-04-26 20:29:09 - Iter[316000], Epoch[000316], learning rate : 0.000003793, Train Loss: 0.087767355, Test MRAE: 0.199434310, Test RMSE: 0.028191015, Test PSNR: 19.015176773
2022-04-26 20:44:59 - Iter[317000], Epoch[000317], learning rate : 0.000004153, Train Loss: 0.087786980, Test MRAE: 0.201855198, Test RMSE: 0.028614521, Test PSNR: 19.041559219
2022-04-26 21:00:41 - Iter[318000], Epoch[000318], learning rate : 0.000004533, Train Loss: 0.087780491, Test MRAE: 0.200222835, Test RMSE: 0.028176807, Test PSNR: 19.015808105
2022-04-26 21:16:24 - Iter[319000], Epoch[000319], learning rate : 0.000004935, Train Loss: 0.087792754, Test MRAE: 0.199134097, Test RMSE: 0.028215753, Test PSNR: 19.028671265
2022-04-26 21:32:06 - Iter[320000], Epoch[000320], learning rate : 0.000005359, Train Loss: 0.087798759, Test MRAE: 0.200270444, Test RMSE: 0.028273745, Test PSNR: 19.037963867
2022-04-26 21:47:48 - Iter[321000], Epoch[000321], learning rate : 0.000005804, Train Loss: 0.087813973, Test MRAE: 0.200781301, Test RMSE: 0.028401980, Test PSNR: 19.028430939
2022-04-26 22:03:33 - Iter[322000], Epoch[000322], learning rate : 0.000006271, Train Loss: 0.087820567, Test MRAE: 0.199449122, Test RMSE: 0.028095623, Test PSNR: 19.015094757
2022-04-26 22:19:15 - Iter[323000], Epoch[000323], learning rate : 0.000006758, Train Loss: 0.087827265, Test MRAE: 0.201382726, Test RMSE: 0.028672023, Test PSNR: 19.032772064
2022-04-26 22:34:57 - Iter[324000], Epoch[000324], learning rate : 0.000007267, Train Loss: 0.087820075, Test MRAE: 0.199561149, Test RMSE: 0.028316684, Test PSNR: 19.027658463
2022-04-26 22:50:42 - Iter[325000], Epoch[000325], learning rate : 0.000007797, Train Loss: 0.087809280, Test MRAE: 0.202198595, Test RMSE: 0.028633879, Test PSNR: 19.034700394
2022-04-26 23:06:24 - Iter[326000], Epoch[000326], learning rate : 0.000008349, Train Loss: 0.087817691, Test MRAE:

Training is not starting

Hello, sir!

First of all, thank you for your contributions.

I downloaded the repository and the dataset as mentioned. Then, when trying to train with the command:

python train.py --method mst_plus_plus  --batch_size 20 --end_epoch 300 --init_lr 4e-4 --outf ./exp/mst_plus_plus/ --data_root ../dataset/  --patch_size 128 --stride 8  --gpu_id 0

(screenshot attached)

It starts loading the data, but once that finishes, it loads the whole set again and again until all the memory is consumed and the run crashes. Training never actually starts; the data loading just keeps looping!

(screenshot attached)

As you can see above, it starts loading the data again after finishing it once.

Could you please help me out with this? I have been facing this issue for a while now; it may well be something I am doing wrong.
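
For reference, one hedged workaround sketch (this is not the repository's hsi_dataset.py; the directory names, the 'cube' key, and the array layout below are assumptions): loading each scene lazily in __getitem__ keeps memory roughly constant, instead of caching every cube in RAM while the dataset is being built.

```python
import os
import numpy as np
import cv2
import h5py
from torch.utils.data import Dataset

class LazyTrainDataset(Dataset):
    """Loads one scene per __getitem__ instead of caching every cube up front."""
    def __init__(self, data_root, patch_size=128):
        self.hsi_dir = os.path.join(data_root, 'Train_Spec')   # assumed folder name
        self.rgb_dir = os.path.join(data_root, 'Train_RGB')    # assumed folder name
        self.names = sorted(f[:-4] for f in os.listdir(self.hsi_dir) if f.endswith('.mat'))
        self.patch_size = patch_size

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        with h5py.File(os.path.join(self.hsi_dir, name + '.mat'), 'r') as f:
            hsi = np.float32(f['cube'][:])                      # assumed layout: (31, H, W)
        rgb = cv2.imread(os.path.join(self.rgb_dir, name + '.jpg'))
        rgb = np.float32(rgb) / 255.0                           # (H, W, 3)
        ps = self.patch_size
        h, w = rgb.shape[:2]
        y = np.random.randint(0, max(h - ps, 1))
        x = np.random.randint(0, max(w - ps, 1))
        rgb_patch = rgb[y:y + ps, x:x + ps].transpose(2, 0, 1)  # (3, ps, ps)
        hsi_patch = hsi[:, y:y + ps, x:x + ps]                  # (31, ps, ps)
        return np.ascontiguousarray(rgb_patch), np.ascontiguousarray(hsi_patch)
```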

Thanks

RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED in Colab

load model from /content/mst_plus_plus.pth
Reconstructing /content/MST-plus-plus/predict_code/demo/ARAD_1K_0912.jpg
Traceback (most recent call last):
  File "/content/MST-plus-plus/predict_code/test.py", line 84, in <module>
    main()
  File "/content/MST-plus-plus/predict_code/test.py", line 28, in main
    test(model, opt.rgb_path, opt.outf)
  File "/content/MST-plus-plus/predict_code/test.py", line 40, in test
    result = forward_ensemble(rgb, model, opt.ensemble_mode)
  File "/content/MST-plus-plus/predict_code/test.py", line 74, in forward_ensemble
    data = forward_func(data)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/content/MST-plus-plus/predict_code/architecture/MST_Plus_Plus.py", line 289, in forward
    x = self.conv_in(x)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/conv.py", line 399, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/conv.py", line 395, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED

Hi,
I find your project very interesting and plan to adapt it to provide less expensive multispectral imagery, derived from RGB images taken by recreational drones, to farmers in developing countries.
Thank you for sharing resources.

Now to my problem: since I do not have a graphics card on my machine, I tried to run the demo on Colab, but I ran into the error above. Is there a way to set it up to run on Colab?
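
One hedged workaround sketch follows (a general PyTorch pattern, not a patch to the repository's test.py; the stand-in layer only makes the snippet self-contained): pick the device at runtime, pass map_location to torch.load, and keep the model on the CPU when no GPU is attached. The other common fix is simply enabling a GPU runtime in Colab (Runtime → Change runtime type → GPU), since CUDNN_STATUS_NOT_INITIALIZED often appears when no GPU is attached or the GPU has run out of memory.

```python
import torch
import torch.nn as nn

# Pick whatever device is actually available; on a CPU-only Colab runtime this is 'cpu'.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Stand-in layer so the snippet runs without the repository; replace it with
# MST_Plus_Plus(...) from predict_code/architecture and load the real checkpoint via
# torch.load('/content/mst_plus_plus.pth', map_location=device) (checkpoint key layout assumed).
model = nn.Conv2d(3, 31, kernel_size=3, padding=1).to(device).eval()

with torch.no_grad():
    rgb = torch.rand(1, 3, 128, 128, device=device)   # stand-in for the demo RGB image
    hsi = model(rgb)
print(hsi.shape, 'on', device)
```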

Evaluating the predicted spectra

Hello, I ran the test file in the predict code and generated the spectral data, but I could not find the code for evaluating the reconstructed spectra. May I ask you about this once more? Wishing you smooth progress in your research. Thanks!
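
For reference, the three metrics reported for this challenge (MRAE, RMSE, PSNR) can be computed directly between a reconstructed cube and the ground truth. Below is a minimal NumPy sketch of the standard definitions, not the repository's own evaluation code, assuming both cubes are float arrays in [0, 1] of shape (31, H, W).

```python
import numpy as np

def mrae(pred, gt, eps=1e-8):
    """Mean Relative Absolute Error."""
    return float(np.mean(np.abs(pred - gt) / (gt + eps)))

def rmse(pred, gt):
    """Root Mean Squared Error."""
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

def psnr(pred, gt, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((pred - gt) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

# Toy example with random stand-in data.
gt = np.random.rand(31, 64, 64).astype(np.float32)
pred = (gt + 0.01 * np.random.randn(31, 64, 64)).astype(np.float32)
print(mrae(pred, gt), rmse(pred, gt), psnr(pred, gt))
```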

HDNet code

It seems that the spatial_attention and spectral_attention (SDL_attention) implementations in the HDNet code are the opposite of what is written in the paper.

σj Parameter Usage

I am writing to seek clarification on the implementation of the σj parameter as described in your paper. The paper mentions that σj is used to adapt the multiplication of K (key) and Q (query) matrices and suggests that this adaptation occurs for each wavelength.

(screenshot attached)

However, upon reviewing the code, I noticed that the implementation uses 1 and 2 heads for the respective calculations, with the rescaling parameter defined per head:

self.rescale = nn.Parameter(torch.ones(heads, 1, 1))

attn = attn * self.rescale

so σj is the same for all wavelengths within a head.

I am trying to reconcile the implementation details with the description provided in the paper, and any additional insight you could provide would be greatly appreciated.
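
For illustration only (this is not the repository's implementation; the shapes below are toy assumptions), the difference between the released per-head σ and a per-wavelength reading of the paper comes down to the shape of the learnable rescale parameter:

```python
import torch
import torch.nn as nn

# Toy sizes, assumed for illustration: 4 heads, 8 spectral channels per head.
heads, dim_head = 4, 8
attn = torch.randn(1, heads, dim_head, dim_head)      # spectral-wise attention map K^T Q

# As in the quoted code: one learnable sigma per head, shared by every wavelength in that head.
rescale_per_head = nn.Parameter(torch.ones(heads, 1, 1))

# A per-wavelength variant: one sigma per (head, wavelength) row of the attention map.
rescale_per_wavelength = nn.Parameter(torch.ones(heads, dim_head, 1))

out_per_head = attn * rescale_per_head                 # broadcasts over both spectral axes
out_per_wavelength = attn * rescale_per_wavelength     # scales each query wavelength separately
print(out_per_head.shape, out_per_wavelength.shape)
```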
