cszn / scunet

Practical Blind Denoising via Swin-Conv-UNet and Data Synthesis (Machine Intelligence Research 2023)

Home Page: https://link.springer.com/article/10.1007/s11633-023-1466-0

License: Apache License 2.0

Python 100.00%
degradation-model image-denoising real-world-image-denoising blind-image-denoising practical-image-denoising

scunet's Introduction

Practical Blind Image Denoising via Swin-Conv-UNet and Data Synthesis


[ArXiv Paper] [Online Demo] [Published Paper]

The following results are obtained by our SCUNet with purely synthetic training data! We did not use the paired noisy/clean data from DND and SIDD during training!

Swin-Conv-UNet (SCUNet) denoising network

The architecture of the proposed Swin-Conv-UNet (SCUNet) denoising network. SCUNet uses the swin-conv (SC) block as the main building block of a UNet backbone. In each SC block, the input is first passed through a 1×1 convolution and then split evenly into two feature map groups; one group is fed into a swin transformer (SwinT) block and the other into a residual 3×3 convolution (RConv) block. The outputs of the SwinT and RConv blocks are then concatenated and passed through a 1×1 convolution to produce the residual of the input. “SConv” and “TConv” denote 2×2 strided convolution with stride 2 and 2×2 transposed convolution with stride 2, respectively.
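To make the data flow concrete, here is a minimal PyTorch sketch of the SC block as described above. It is an illustration only: the SwinT branch is left as a pluggable module (defaulting to nn.Identity here), and the layer choices inside RConvBlock are assumptions; the official definition lives in models/network_scunet.py.

    # Minimal sketch of the swin-conv (SC) block, assuming a generic swin branch;
    # see models/network_scunet.py for the actual implementation.
    import torch
    import torch.nn as nn

    class RConvBlock(nn.Module):
        """Residual 3x3 convolution branch (RConv): x + Conv-ReLU-Conv(x)."""
        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            )

        def forward(self, x):
            return x + self.body(x)

    class SCBlock(nn.Module):
        """1x1 conv -> split -> (SwinT | RConv) -> concat -> 1x1 conv -> residual."""
        def __init__(self, channels, swin_block=None):
            super().__init__()
            self.conv_in = nn.Conv2d(channels, channels, 1)
            # Placeholder for the swin transformer (SwinT) branch.
            self.swin = swin_block if swin_block is not None else nn.Identity()
            self.rconv = RConvBlock(channels // 2)
            self.conv_out = nn.Conv2d(channels, channels, 1)

        def forward(self, x):
            y = self.conv_in(x)
            y_t, y_c = torch.chunk(y, 2, dim=1)              # split evenly into two groups
            y = torch.cat((self.swin(y_t), self.rconv(y_c)), dim=1)
            return x + self.conv_out(y)                      # block output = input + residual

    # Quick shape check
    block = SCBlock(64)
    print(block(torch.randn(1, 64, 128, 128)).shape)         # torch.Size([1, 64, 128, 128])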

New data synthesis pipeline for real image denoising

Schematic illustration of the proposed paired training patch synthesis pipeline. For a high-quality image, a randomly shuffled degradation sequence is applied to produce a noisy image. Meanwhile, resizing and reverse-forward tone mapping are applied to produce a corresponding clean image. Paired noisy/clean training patches are then cropped for training the deep blind denoising model. Note that, since Poisson noise is signal-dependent, the dashed arrow for “Poisson” means the clean image is used to generate the Poisson noise. To tackle the color shift issue, the dashed arrow for “Camera Sensor” means the reverse-forward tone mapping is performed on the clean image.

Synthesized noisy/clean patch pairs via our proposed training data synthesis pipeline. The size of the high-quality image patch is 544×544. The size of the noisy/clean patches is 128×128.
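For intuition only, here is a heavily simplified Python sketch of the idea: apply a randomly shuffled sequence of degradations to a high-quality patch to obtain the noisy image, keep the clean counterpart, and crop an aligned pair. The degradation functions below are crude stand-ins with illustrative names; the actual pipeline (Gaussian/Poisson/camera-sensor noise, resizing, reverse-forward tone mapping, etc.) is considerably more elaborate.

    import random
    import numpy as np

    def add_gaussian(img, sigma=25 / 255.0):
        # Signal-independent Gaussian noise.
        return img + np.random.normal(0.0, sigma, img.shape)

    def add_poisson(img, scale=10.0):
        # Signal-dependent: the noise is generated from the clean image itself.
        return np.random.poisson(np.clip(img, 0, 1) * 255.0 * scale) / (255.0 * scale)

    def add_quantization(img, step=0.02):
        # Crude stand-in for compression artifacts.
        return np.round(img / step) * step

    DEGRADATIONS = [add_gaussian, add_poisson, add_quantization]

    def synthesize_pair(hq_patch, patch_size=128):
        """Produce an aligned (noisy, clean) training pair from one high-quality patch."""
        noisy = hq_patch.copy()
        for degrade in random.sample(DEGRADATIONS, k=len(DEGRADATIONS)):  # shuffled order
            noisy = degrade(noisy)
        h, w = hq_patch.shape[:2]
        top, left = random.randint(0, h - patch_size), random.randint(0, w - patch_size)
        clean = hq_patch[top:top + patch_size, left:left + patch_size]
        noisy = np.clip(noisy[top:top + patch_size, left:left + patch_size], 0.0, 1.0)
        return noisy.astype(np.float32), clean.astype(np.float32)

    noisy, clean = synthesize_pair(np.random.rand(544, 544, 3))
    print(noisy.shape, clean.shape)   # (128, 128, 3) (128, 128, 3)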

Web Demo

Try the web demo for the SCUNet models on Replicate.

Codes

  1. Download the SCUNet models

     python main_download_pretrained_models.py --models "SCUNet" --model_dir "model_zoo"

  2. Gaussian denoising

     1. Grayscale images

        python main_test_scunet_gray_gaussian.py --model_name scunet_gray_25 --noise_level_img 25 --testset_name set12

     2. Color images

        python main_test_scunet_color_gaussian.py --model_name scunet_color_25 --noise_level_img 25 --testset_name bsd68

  3. Blind real image denoising

     python main_test_scunet_real_application.py --model_name scunet_color_real_psnr --testset_name real3
     python main_test_scunet_real_application.py --model_name scunet_color_real_gan --testset_name real3

Results on Gaussian denoising

Results on real image denoising

@article{zhang2023practical,
  author    = {Zhang, Kai and Li, Yawei and Liang, Jingyun and Cao, Jiezhang and Zhang, Yulun and Tang, Hao and Fan, Deng-Ping and Timofte, Radu and Gool, Luc Van},
  title     = {Practical Blind Image Denoising via Swin-Conv-UNet and Data Synthesis},
  journal   = {Machine Intelligence Research},
  volume    = {20},
  number    = {6},
  pages     = {822--836},
  year      = {2023},
  publisher = {Springer},
  doi       = {10.1007/s11633-023-1466-0},
  url       = {https://doi.org/10.1007/s11633-023-1466-0}
}

scunet's People

Contributors

chenxwh, cszn


scunet's Issues

Demo and pre-trained models on Hugging Face Spaces

Hello! Your demo on Replicate is extremely cool ✨ Would you be open to adding a demo on Hugging Face Spaces as well? We've got docs for doing that, but I'd be happy to help out with it!

If you'd like, you can also mirror your model weights over on the Hugging Face model hub as well, which I know our users would love 🤗 People have been building all kinds of cool things with the models on the Hub these days, like this demo that combines 3 different models 👀

scunet_color_real_gan could even be uploaded as a private or gated model, if that's something you're interested in.

cc: @osanseviero

Control sigma for scunet_color_real_psnr?

Results look very good, better than the "scunet_color" at times. But it would be nice to control the noise reduction like the grayscale and color ones.

Thank you.

Will there be more models?

Very impressed with the results! However, at times even the lowest noise level "15" model can remove too much detail. Will there be a "5" or "10" noise level model in the future?

Thank you for all the hard work!

Data range is clipped for 8 bit images with noise?

Hi
I am using 8-bit greyscale images as input, and note the usual /255 operations to put the data in the range 0-1. However, the operation

        if args.need_degradation:  # degradation process
            np.random.seed(seed=0)  # for reproducibility
            img_L += np.random.normal(0, args.noise_level_img/255., img_L.shape)

can result in values outside this range.

I note that the values are later clipped by util.single2uint, so what is saved is not precisely what is processed. Furthermore, I wonder about the effect on inference: presumably values are also clipped to the 0-1 range?
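If the concern is simply keeping the degraded input in [0, 1] before saving or inference, one hedged option (not something the repository's script does at this point, as far as I can tell) is to clip right after adding the noise:

    import numpy as np

    np.random.seed(seed=0)                                   # for reproducibility
    img_L = np.random.rand(64, 64).astype(np.float32)        # stand-in clean image in [0, 1]
    img_L += np.random.normal(0, 25 / 255.0, img_L.shape)    # same degradation as above
    img_L = np.clip(img_L, 0.0, 1.0)                         # keep values in range before inference/saving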

image_net

When will the pre-training weights on ImageNet be released?

Training time of SCUNet

You mention in the paper that

It takes about three days to train a denoising model on four NVIDIA RTX 2080 Ti GPUs.

However, in my case it takes about 5 days to train on 4 V100 GPUs for 1,200,000 iterations, with images cropped to 544×544. That is much longer than 3 days. How can I speed it up? Thanks.

Edge artifacts

Changing ReplicationPad2d to ReplicationPad2d can avoid edge artifacts in some cases.

Fantastic project, great results, but...

  File "C:\Users\Miki\SCUNet\models\network_scunet.py", line 88, in forward
    if self.type!='W': output = torch.roll(output, shifts=(self.window_size//2, self.window_size//2), dims=(1,2))
RuntimeError: CUDA out of memory. Tried to allocate 486.00 MiB (GPU 0; 12.00 GiB total capacity; 10.76 GiB already allocated; 0 bytes free; 11.15 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

@cszn Please add minor changes to the code to support --tile. In a previously opened issue I asked the same question, followed your instructions, and uncommented line number 99, but the same problem appears again.
There is no way to denoise photos larger than about 3 Mpx due to CUDA OOM errors. It is painful to manually crop in a graphics editor and then rejoin a big photo gallery after denoising.
I kindly ask for help, because this project has really great potential. Others like SwinIR, Restormer, etc. don't do the job like yours in terms of real denoising. Thanks very much in advance.
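Until an official --tile option exists, a hedged workaround is to run the model over overlapping tiles and average the overlaps. The sketch below is illustrative only: `model` is any loaded SCUNet instance, the tile/overlap values are guesses, and edge tiles may need extra padding to a spatial size the network accepts.

    import torch

    def denoise_tiled(model, img, tile=256, overlap=32):
        """img: (1, C, H, W) tensor in [0, 1]; processes overlapping tiles and averages the overlaps."""
        _, _, h, w = img.shape
        out = torch.zeros_like(img)
        weight = torch.zeros_like(img)
        stride = tile - overlap
        for top in range(0, h, stride):
            for left in range(0, w, stride):
                bottom, right = min(top + tile, h), min(left + tile, w)
                # NOTE: edge tiles may need padding to a size the network accepts (assumption).
                patch = img[:, :, top:bottom, left:right]
                with torch.no_grad():
                    out[:, :, top:bottom, left:right] += model(patch)
                weight[:, :, top:bottom, left:right] += 1
        return out / weight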

training code

Hi, when will the training code be released?
What is the size of your training image?

train

Hello, can you release the training code? Thank you!

the issue of train iterations in your paper

Dear author:
In your paper, the learning rate decays by a factor of 0.5 every 200,000 iterations.
I want to confirm whether it is really 200,000 iterations, because in my opinion that is a very large number.
The learning rate starts from 1e-4 and decays by a factor of 0.5 every 200,000 iterations, finally ending at 3.125e-6. Thus, a total of 1,000,000 iterations are required, right?
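For what it's worth, the arithmetic checks out; a throwaway sanity check:

    lr, iters = 1e-4, 0
    while lr > 3.125e-6:
        iters += 200_000          # decay by 0.5 every 200,000 iterations
        lr *= 0.5
    print(iters, lr)              # 1000000 3.125e-06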

Training code

Thanks for sharing the code. Any idea how to train the model on our own data?

Question: Any chance of SCUNet v2?

SCUNet does a GREAT job. But sometimes some details are lost. Is there any chance of a v2 in the future that might preserve or restore the details while still cleaning?

Error in converting to ONNX model

I am getting the following error while trying to convert the pre-trained model to an ONNX model. Can you please look into it and let me know whether the pre-trained weights were generated with the current version of the model? The conversion code is provided after the error.

Block Initial Type: W, drop_path_rate:0.000000
Block Initial Type: SW, drop_path_rate:0.000000
Block Initial Type: W, drop_path_rate:0.000000
Block Initial Type: SW, drop_path_rate:0.000000
Block Initial Type: W, drop_path_rate:0.000000
Block Initial Type: SW, drop_path_rate:0.000000
Block Initial Type: W, drop_path_rate:0.000000
Block Initial Type: SW, drop_path_rate:0.000000
Block Initial Type: W, drop_path_rate:0.000000
Block Initial Type: SW, drop_path_rate:0.000000
Block Initial Type: W, drop_path_rate:0.000000
Block Initial Type: SW, drop_path_rate:0.000000
Block Initial Type: W, drop_path_rate:0.000000
Block Initial Type: SW, drop_path_rate:0.000000
Traceback (most recent call last):
  File "/home/ayaz_khan/SCUNet/onnx.py", line 2, in <module>
    import torch.onnx
  File "/home/ayaz_khan/.local/lib/python3.9/site-packages/torch/onnx/__init__.py", line 57, in <module>
    from ._internal.onnxruntime import (
  File "/home/ayaz_khan/.local/lib/python3.9/site-packages/torch/onnx/_internal/onnxruntime.py", line 34, in <module>
    import onnx
  File "/home/ayaz_khan/SCUNet/onnx.py", line 25, in <module>
    convert_to_onnx(model_path, onnx_path)
  File "/home/ayaz_khan/SCUNet/onnx.py", line 8, in convert_to_onnx
    model.load_state_dict(torch.load(model_path))
  File "/home/ayaz_khan/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 2152, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for SCUNet:
Missing key(s) in state_dict: "m_down1.2.weight", "m_down2.2.weight", "m_down3.2.weight".
Unexpected key(s) in state_dict: "m_down1.3.trans_block.ln1.weight", "m_down1.3.trans_block.ln1.bias", "m_down1.3.trans_block.msa.relative_position_params", "m_down1.3.trans_block.msa.embedding_layer.weight", "m_down1.3.trans_block.msa.embedding_layer.bias", "m_down1.3.trans_block.msa.linear.weight", "m_down1.3.trans_block.msa.linear.bias", "m_down1.3.trans_block.ln2.weight", "m_down1.3.trans_block.ln2.bias", "m_down1.3.trans_block.mlp.0.weight", "m_down1.3.trans_block.mlp.0.bias", "m_down1.3.trans_block.mlp.2.weight", "m_down1.3.trans_block.mlp.2.bias", "m_down1.3.conv1_1.weight", "m_down1.3.conv1_1.bias", "m_down1.3.conv1_2.weight", "m_down1.3.conv1_2.bias", "m_down1.3.conv_block.0.weight", "m_down1.3.conv_block.2.weight", "m_down1.4.weight", "m_down1.2.trans_block.ln1.weight", "m_down1.2.trans_block.ln1.bias", "m_down1.2.trans_block.msa.relative_position_params", "m_down1.2.trans_block.msa.embedding_layer.weight", "m_down1.2.trans_block.msa.embedding_layer.bias", "m_down1.2.trans_block.msa.linear.weight", "m_down1.2.trans_block.msa.linear.bias", "m_down1.2.trans_block.ln2.weight", "m_down1.2.trans_block.ln2.bias", "m_down1.2.trans_block.mlp.0.weight", "m_down1.2.trans_block.mlp.0.bias", "m_down1.2.trans_block.mlp.2.weight", "m_down1.2.trans_block.mlp.2.bias", "m_down1.2.conv1_1.weight", "m_down1.2.conv1_1.bias", "m_down1.2.conv1_2.weight", "m_down1.2.conv1_2.bias", "m_down1.2.conv_block.0.weight", "m_down1.2.conv_block.2.weight", "m_down2.3.trans_block.ln1.weight", "m_down2.3.trans_block.ln1.bias", "m_down2.3.trans_block.msa.relative_position_params", "m_down2.3.trans_block.msa.embedding_layer.weight", "m_down2.3.trans_block.msa.embedding_layer.bias", "m_down2.3.trans_block.msa.linear.weight", "m_down2.3.trans_block.msa.linear.bias", "m_down2.3.trans_block.ln2.weight", "m_down2.3.trans_block.ln2.bias", "m_down2.3.trans_block.mlp.0.weight", "m_down2.3.trans_block.mlp.0.bias", "m_down2.3.trans_block.mlp.2.weight", "m_down2.3.trans_block.mlp.2.bias", "m_down2.3.conv1_1.weight", "m_down2.3.conv1_1.bias", "m_down2.3.conv1_2.weight", "m_down2.3.conv1_2.bias", "m_down2.3.conv_block.0.weight", "m_down2.3.conv_block.2.weight", "m_down2.4.weight", "m_down2.2.trans_block.ln1.weight", "m_down2.2.trans_block.ln1.bias", "m_down2.2.trans_block.msa.relative_position_params", "m_down2.2.trans_block.msa.embedding_layer.weight", "m_down2.2.trans_block.msa.embedding_layer.bias", "m_down2.2.trans_block.msa.linear.weight", "m_down2.2.trans_block.msa.linear.bias", "m_down2.2.trans_block.ln2.weight", "m_down2.2.trans_block.ln2.bias", "m_down2.2.trans_block.mlp.0.weight", "m_down2.2.trans_block.mlp.0.bias", "m_down2.2.trans_block.mlp.2.weight", "m_down2.2.trans_block.mlp.2.bias", "m_down2.2.conv1_1.weight", "m_down2.2.conv1_1.bias", "m_down2.2.conv1_2.weight", "m_down2.2.conv1_2.bias", "m_down2.2.conv_block.0.weight", "m_down2.2.conv_block.2.weight", "m_down3.3.trans_block.ln1.weight", "m_down3.3.trans_block.ln1.bias", "m_down3.3.trans_block.msa.relative_position_params", "m_down3.3.trans_block.msa.embedding_layer.weight", "m_down3.3.trans_block.msa.embedding_layer.bias", "m_down3.3.trans_block.msa.linear.weight", "m_down3.3.trans_block.msa.linear.bias", "m_down3.3.trans_block.ln2.weight", "m_down3.3.trans_block.ln2.bias", "m_down3.3.trans_block.mlp.0.weight", "m_down3.3.trans_block.mlp.0.bias", "m_down3.3.trans_block.mlp.2.weight", "m_down3.3.trans_block.mlp.2.bias", "m_down3.3.conv1_1.weight", "m_down3.3.conv1_1.bias", "m_down3.3.conv1_2.weight", "m_down3.3.conv1_2.bias", "m_down3.3.conv_block.0.weight", 
"m_down3.3.conv_block.2.weight", "m_down3.4.weight", "m_down3.2.trans_block.ln1.weight", "m_down3.2.trans_block.ln1.bias", "m_down3.2.trans_block.msa.relative_position_params", "m_down3.2.trans_block.msa.embedding_layer.weight", "m_down3.2.trans_block.msa.embedding_layer.bias", "m_down3.2.trans_block.msa.linear.weight", "m_down3.2.trans_block.msa.linear.bias", "m_down3.2.trans_block.ln2.weight", "m_down3.2.trans_block.ln2.bias", "m_down3.2.trans_block.mlp.0.weight", "m_down3.2.trans_block.mlp.0.bias", "m_down3.2.trans_block.mlp.2.weight", "m_down3.2.trans_block.mlp.2.bias", "m_down3.2.conv1_1.weight", "m_down3.2.conv1_1.bias", "m_down3.2.conv1_2.weight", "m_down3.2.conv1_2.bias", "m_down3.2.conv_block.0.weight", "m_down3.2.conv_block.2.weight", "m_body.2.trans_block.ln1.weight", "m_body.2.trans_block.ln1.bias", "m_body.2.trans_block.msa.relative_position_params", "m_body.2.trans_block.msa.embedding_layer.weight", "m_body.2.trans_block.msa.embedding_layer.bias", "m_body.2.trans_block.msa.linear.weight", "m_body.2.trans_block.msa.linear.bias", "m_body.2.trans_block.ln2.weight", "m_body.2.trans_block.ln2.bias", "m_body.2.trans_block.mlp.0.weight", "m_body.2.trans_block.mlp.0.bias", "m_body.2.trans_block.mlp.2.weight", "m_body.2.trans_block.mlp.2.bias", "m_body.2.conv1_1.weight", "m_body.2.conv1_1.bias", "m_body.2.conv1_2.weight", "m_body.2.conv1_2.bias", "m_body.2.conv_block.0.weight", "m_body.2.conv_block.2.weight", "m_body.3.trans_block.ln1.weight", "m_body.3.trans_block.ln1.bias", "m_body.3.trans_block.msa.relative_position_params", "m_body.3.trans_block.msa.embedding_layer.weight", "m_body.3.trans_block.msa.embedding_layer.bias", "m_body.3.trans_block.msa.linear.weight", "m_body.3.trans_block.msa.linear.bias", "m_body.3.trans_block.ln2.weight", "m_body.3.trans_block.ln2.bias", "m_body.3.trans_block.mlp.0.weight", "m_body.3.trans_block.mlp.0.bias", "m_body.3.trans_block.mlp.2.weight", "m_body.3.trans_block.mlp.2.bias", "m_body.3.conv1_1.weight", "m_body.3.conv1_1.bias", "m_body.3.conv1_2.weight", "m_body.3.conv1_2.bias", "m_body.3.conv_block.0.weight", "m_body.3.conv_block.2.weight", "m_up3.3.trans_block.ln1.weight", "m_up3.3.trans_block.ln1.bias", "m_up3.3.trans_block.msa.relative_position_params", "m_up3.3.trans_block.msa.embedding_layer.weight", "m_up3.3.trans_block.msa.embedding_layer.bias", "m_up3.3.trans_block.msa.linear.weight", "m_up3.3.trans_block.msa.linear.bias", "m_up3.3.trans_block.ln2.weight", "m_up3.3.trans_block.ln2.bias", "m_up3.3.trans_block.mlp.0.weight", "m_up3.3.trans_block.mlp.0.bias", "m_up3.3.trans_block.mlp.2.weight", "m_up3.3.trans_block.mlp.2.bias", "m_up3.3.conv1_1.weight", "m_up3.3.conv1_1.bias", "m_up3.3.conv1_2.weight", "m_up3.3.conv1_2.bias", "m_up3.3.conv_block.0.weight", "m_up3.3.conv_block.2.weight", "m_up3.4.trans_block.ln1.weight", "m_up3.4.trans_block.ln1.bias", "m_up3.4.trans_block.msa.relative_position_params", "m_up3.4.trans_block.msa.embedding_layer.weight", "m_up3.4.trans_block.msa.embedding_layer.bias", "m_up3.4.trans_block.msa.linear.weight", "m_up3.4.trans_block.msa.linear.bias", "m_up3.4.trans_block.ln2.weight", "m_up3.4.trans_block.ln2.bias", "m_up3.4.trans_block.mlp.0.weight", "m_up3.4.trans_block.mlp.0.bias", "m_up3.4.trans_block.mlp.2.weight", "m_up3.4.trans_block.mlp.2.bias", "m_up3.4.conv1_1.weight", "m_up3.4.conv1_1.bias", "m_up3.4.conv1_2.weight", "m_up3.4.conv1_2.bias", "m_up3.4.conv_block.0.weight", "m_up3.4.conv_block.2.weight", "m_up2.3.trans_block.ln1.weight", "m_up2.3.trans_block.ln1.bias", 
"m_up2.3.trans_block.msa.relative_position_params", "m_up2.3.trans_block.msa.embedding_layer.weight", "m_up2.3.trans_block.msa.embedding_layer.bias", "m_up2.3.trans_block.msa.linear.weight", "m_up2.3.trans_block.msa.linear.bias", "m_up2.3.trans_block.ln2.weight", "m_up2.3.trans_block.ln2.bias", "m_up2.3.trans_block.mlp.0.weight", "m_up2.3.trans_block.mlp.0.bias", "m_up2.3.trans_block.mlp.2.weight", "m_up2.3.trans_block.mlp.2.bias", "m_up2.3.conv1_1.weight", "m_up2.3.conv1_1.bias", "m_up2.3.conv1_2.weight", "m_up2.3.conv1_2.bias", "m_up2.3.conv_block.0.weight", "m_up2.3.conv_block.2.weight", "m_up2.4.trans_block.ln1.weight", "m_up2.4.trans_block.ln1.bias", "m_up2.4.trans_block.msa.relative_position_params", "m_up2.4.trans_block.msa.embedding_layer.weight", "m_up2.4.trans_block.msa.embedding_layer.bias", "m_up2.4.trans_block.msa.linear.weight", "m_up2.4.trans_block.msa.linear.bias", "m_up2.4.trans_block.ln2.weight", "m_up2.4.trans_block.ln2.bias", "m_up2.4.trans_block.mlp.0.weight", "m_up2.4.trans_block.mlp.0.bias", "m_up2.4.trans_block.mlp.2.weight", "m_up2.4.trans_block.mlp.2.bias", "m_up2.4.conv1_1.weight", "m_up2.4.conv1_1.bias", "m_up2.4.conv1_2.weight", "m_up2.4.conv1_2.bias", "m_up2.4.conv_block.0.weight", "m_up2.4.conv_block.2.weight", "m_up1.3.trans_block.ln1.weight", "m_up1.3.trans_block.ln1.bias", "m_up1.3.trans_block.msa.relative_position_params", "m_up1.3.trans_block.msa.embedding_layer.weight", "m_up1.3.trans_block.msa.embedding_layer.bias", "m_up1.3.trans_block.msa.linear.weight", "m_up1.3.trans_block.msa.linear.bias", "m_up1.3.trans_block.ln2.weight", "m_up1.3.trans_block.ln2.bias", "m_up1.3.trans_block.mlp.0.weight", "m_up1.3.trans_block.mlp.0.bias", "m_up1.3.trans_block.mlp.2.weight", "m_up1.3.trans_block.mlp.2.bias", "m_up1.3.conv1_1.weight", "m_up1.3.conv1_1.bias", "m_up1.3.conv1_2.weight", "m_up1.3.conv1_2.bias", "m_up1.3.conv_block.0.weight", "m_up1.3.conv_block.2.weight", "m_up1.4.trans_block.ln1.weight", "m_up1.4.trans_block.ln1.bias", "m_up1.4.trans_block.msa.relative_position_params", "m_up1.4.trans_block.msa.embedding_layer.weight", "m_up1.4.trans_block.msa.embedding_layer.bias", "m_up1.4.trans_block.msa.linear.weight", "m_up1.4.trans_block.msa.linear.bias", "m_up1.4.trans_block.ln2.weight", "m_up1.4.trans_block.ln2.bias", "m_up1.4.trans_block.mlp.0.weight", "m_up1.4.trans_block.mlp.0.bias", "m_up1.4.trans_block.mlp.2.weight", "m_up1.4.trans_block.mlp.2.bias", "m_up1.4.conv1_1.weight", "m_up1.4.conv1_1.bias", "m_up1.4.conv1_2.weight", "m_up1.4.conv1_2.bias", "m_up1.4.conv_block.0.weight", "m_up1.4.conv_block.2.weight".

import torch
import torch.onnx
from models.network_scunet import SCUNet  # Assuming this is the SCUNet model definition

def convert_to_onnx(model_path, onnx_path, input_shape=(1, 3, 256, 256)):
    # Load the pre-trained PyTorch model
    model = SCUNet()
    model.load_state_dict(torch.load(model_path))
    
    # Set the model to evaluation mode
    model.eval()

    # Define dummy input data
    dummy_input = torch.randn(input_shape)

    # Convert the model to ONNX format
    torch.onnx.export(model, dummy_input, onnx_path, verbose=True)

    print(f"Model converted to ONNX format and saved as {onnx_path}")

# Paths
model_path = "model_zoo/scunet_color_real_psnr.pth"
onnx_path = "./scunet_color_real_gan.onnx"

convert_to_onnx(model_path, onnx_path)
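Two hedged observations, in case they help. First, the missing/unexpected keys suggest the model was constructed with the default constructor arguments, while the released weights appear to expect the configuration used in the repository's test scripts (config=[4,4,4,4,4,4,4], dim=64); that is a guess, not a confirmed diagnosis. Second, the traceback shows `import onnx` resolving to the local onnx.py, so renaming the script avoids shadowing the real onnx package. A conversion sketch under those assumptions:

    # convert_scunet_to_onnx.py  (renamed so it does not shadow the `onnx` package)
    import torch
    from models.network_scunet import SCUNet

    # Assumption: the released weights match the configuration used in the test scripts.
    model = SCUNet(in_nc=3, config=[4, 4, 4, 4, 4, 4, 4], dim=64)
    state_dict = torch.load("model_zoo/scunet_color_real_psnr.pth", map_location="cpu")
    model.load_state_dict(state_dict, strict=True)
    model.eval()

    dummy_input = torch.randn(1, 3, 256, 256)
    torch.onnx.export(model, dummy_input, "scunet_color_real_psnr.onnx", opset_version=17)
    print("Exported scunet_color_real_psnr.onnx")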

download pretrained_model

Running python main_download_pretrained_models.py --models "SCUNet" --model_dir "model_zoo", I get the traceback:
requests.exceptions.SSLError: HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: /cszn/KAIR/releases/download/v1.0/scunet_gray_15.pth (Caused by SSLError(SSLZeroReturnError(6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1131)'))).
Could you provide another way to download the pretrained models?
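If the SSL problem persists, one hedged workaround is to fetch the release file shown in the traceback directly (the URL is taken from the error above; any HTTP client or a browser works equally well, ideally from a network without TLS interference) and place it in model_zoo:

    import requests

    url = "https://github.com/cszn/KAIR/releases/download/v1.0/scunet_gray_15.pth"
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    with open("model_zoo/scunet_gray_15.pth", "wb") as f:
        f.write(resp.content)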
