
wbhu / dncnn-tensorflow


A TensorFlow implementation of the paper "Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising"

License: GNU General Public License v3.0

Python 100.00%
dncnn image-denoising residual-learning tensorflow

dncnn-tensorflow's Introduction

DnCNN-tensorflow

Contributions welcome

A TensorFlow implementation of the TIP2017 paper Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising

Model Architecture

[Model architecture graph]

Results

[Visual comparison of denoising results]

  • BSD68 Average Result

The average PSNR (dB) results of different methods on the BSD68 dataset:

| Noise Level | BM3D  | WNNM  | EPLL  | MLP   | CSF   | TNRD  | DnCNN-S | DnCNN-B | DnCNN-tensorflow |
|-------------|-------|-------|-------|-------|-------|-------|---------|---------|------------------|
| 25          | 28.57 | 28.83 | 28.68 | 28.96 | 28.74 | 28.92 | 29.23   | 29.16   | 29.17            |
  • Set12 Average Result

| Noise Level | DnCNN-S | DnCNN-tensorflow |
|-------------|---------|------------------|
| 25          | 30.44   | 30.38            |

Requirements

tensorflow >= 1.4
numpy
opencv
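
A possible install line, assuming pip and a Python version for which TensorFlow 1.x wheels still exist (the package names are the usual PyPI ones; the repo does not pin versions):

$ pip install "tensorflow>=1.4,<2" numpy opencv-python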

Dataset

I used the BSDS500 dataset for training; you can download it here: http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/BSR/BSR_bsds500.tgz It contains 500 RGB images, 400 for training and 100 for testing.

Data preprocessing and noise generation

Before training, you have to rescale the images to 180x180 and add noise to them. The folder structure is supposed to be:

./data/train/original  for the 180x180 original train images
./data/train/noisy  for the 180x180 noisy train images
./data/test/original  for the 180x180 original test images
./data/test/noisy  for the 180x180 noisy test images

The original files are needed at test time only to compute the PSNR. You can still denoise without them: just put the noisy files in ./data/test/original as well.
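
A minimal preprocessing sketch under these conventions. It assumes OpenCV and NumPy, grayscale images, sigma = 25 (matching the results tables above), and the directory layout of the BSR tarball; the function name and paths are illustrative, not part of the repo:

```python
import glob
import os

import cv2
import numpy as np

SIGMA = 25  # assumed noise level, matching the results above

def prepare(src_dir, dst_original, dst_noisy, size=180):
    """Rescale images to size x size and write clean/noisy pairs."""
    os.makedirs(dst_original, exist_ok=True)
    os.makedirs(dst_noisy, exist_ok=True)
    for path in sorted(glob.glob(os.path.join(src_dir, '*.jpg'))):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (size, size))
        noise = np.random.normal(0.0, SIGMA, img.shape)
        noisy = np.clip(img.astype(np.float64) + noise, 0, 255).astype(np.uint8)
        name = os.path.basename(path)
        cv2.imwrite(os.path.join(dst_original, name), img)
        cv2.imwrite(os.path.join(dst_noisy, name), noisy)

# paths assume the layout inside BSR_bsds500.tgz
prepare('./BSR/BSDS500/data/images/train', './data/train/original', './data/train/noisy')
prepare('./BSR/BSDS500/data/images/test', './data/test/original', './data/test/noisy')
```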

Train

$ python main.py
(note: you can pass command-line arguments according to the source code, for example
    $ python main.py --batch_size 64)

Test

$ python main.py --phase test

dncnn-tensorflow's People

Contributors

clausmichele · lizhiyuanustc · sdlpkxd


dncnn-tensorflow's Issues

Denoise the same picture with different nets

Hi, thanks for your code. I am confused by a phenomenon: I trained 3 networks with sigma = 5, 10, 15 respectively, and I have a noisy picture with sigma = 5. When I use the three trained networks to denoise the same picture, the network trained with sigma = 5 performs worse than the other two, and the one trained with sigma = 15 performs best. Do you know the reason?

Performs well even without the BN layer

Hello, when I trained the network I did not add the BN layers, and the other structures were the same as the original. The training set was only BSD400, with a patch size of 40x40. On Set12 the PSNR is higher than the original, and the loss also converges faster than with the BN layers.

The initialization of weights?

Hi, nice work. I want to know the meaning of the initialization in get_conv_weights and get_bn_weights:

```python
import math
import tensorflow as tf

def get_conv_weights(weight_shape, sess, name="get_conv_weights"):
    # scaled truncated-normal initialization
    return math.sqrt(2 / (9.0 * 64)) * sess.run(tf.truncated_normal(weight_shape))

def get_bn_weights(weight_shape, clip_b, sess, name="get_bn_weights"):
    weights = get_conv_weights(weight_shape, sess)
    return clipping(weights, clip_b)

def clipping(A, clip_b, name="clipping"):
    # push values inside (-clip_b, clip_b) out to the nearest bound
    h, w = A.shape
    for i in range(h):  # xrange in the original (Python 2)
        for j in range(w):
            if 0 <= A[i, j] < clip_b:
                A[i, j] = clip_b
            elif -clip_b < A[i, j] < 0:
                A[i, j] = -clip_b
    return A
```
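
For reference, the sqrt(2 / (9.0 * 64)) factor matches He initialization for a 3x3 convolution with 64 feature maps: the fan-in is 3 * 3 * 64 and the standard deviation is sqrt(2 / fan_in). The clipping step presumably keeps the batch-norm weights away from zero. A quick check of the scale factor:

```python
import math

fan_in = 3 * 3 * 64            # 3x3 kernel, 64 feature maps
std = math.sqrt(2.0 / fan_in)  # ~0.059, the factor used above
print(std)
```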

Ask a question

Dear author, when I use your code, I get the following error:
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 295, in minimize
([str(v) for _, v in grads_and_vars], loss))
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ['Tensor("conv1/weights/read:0", shape=(3, 3, 1, 64), dtype=float32)', ......

What should I do?
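
For context, this error usually means there is no gradient path from the loss to any trainable variable, e.g. when part of the loss is computed outside the graph in NumPy. A hedged sketch of the pattern TensorFlow 1.x expects; the tensors here are stand-ins, not the repo's:

```python
import tensorflow as tf

# illustrative graph: the loss must be built from TF ops on graph tensors
# so that gradients can flow back to the trainable weights
x = tf.placeholder(tf.float32, [None, 40, 40, 1])
y = tf.placeholder(tf.float32, [None, 40, 40, 1])
out = tf.layers.conv2d(x, 1, 3, padding='same')  # stand-in for the network
loss = tf.reduce_mean(tf.square(out - y))        # a TF op, not a NumPy value
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
```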

Different patches for noisy and clean pairs

In your code, the noisy image and its corresponding clean image appear in the filenames list as separate entries, and get_patches() generates random patches independently for each entry. Accordingly, the patch locations for a noisy image and for its corresponding clean image are not consistent. Am I right?

Handling the variable image sizes

Is it possible to change your code to handle the variable size images?

In this case, you may need to change the fixed value '116' used for computing point1 & point2 into variables:

point1 = random.randint(0, image_height - patch_size)
point2 = random.randint(0, image_width - patch_size)

However, image_height and image_width are None during TensorFlow initialization, and this causes a problem. Is there a smart way to handle the None values during initialization?
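
One way around the None dimensions, sketched on the assumption that patches are cut from NumPy arrays before being fed to the network: take the sizes from the array itself at feed time, where they are always concrete integers.

```python
import random

def random_patch_coords(img, patch_size=40):
    # img is a NumPy array, so its height/width are concrete at feed time,
    # unlike placeholder dimensions that are None at graph-construction time
    image_height, image_width = img.shape[:2]
    point1 = random.randint(0, image_height - patch_size)
    point2 = random.randint(0, image_width - patch_size)
    return point1, point2
```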

Blind gaussian denoising

Hello! Thank you for this amazing code!

I want to ask about the DnCNN-B model. How would one implement it using this code?

In the paper the patch size is set to 50x50 and the number of layers is increased to 20. They also nearly double the number of training patches. What about the noise level? Do we just pick a random sigma in [0, 50] for each batch?
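
For reference, the paper trains DnCNN-B with noise levels sampled from [0, 55]; sampling one sigma per batch (or per patch) is the straightforward reading. A minimal sketch, assuming clean is a float32 array scaled to [0, 1]:

```python
import numpy as np

clean = np.zeros((64, 50, 50, 1), dtype=np.float32)  # stand-in batch

# sample one noise level per batch for blind (DnCNN-B style) training
sigma = np.random.uniform(0, 55)
noisy = clean + (sigma / 255.0) * np.random.randn(*clean.shape).astype(np.float32)
```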

help

I ran main.py, but there are still some errors, and one of them even points into random.py. Why? I'm a beginner, so I hope someone kind can help me. Thanks!

(base) C:\Users\liu\Downloads\DnCNN-tensorflow-master>python model.py

(base) C:\Users\liu\Downloads\DnCNN-tensorflow-master>python main.py
GPU

Traceback (most recent call last):
File "main.py", line 88, in
tf.app.run()
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "main.py", line 60, in main
model = denoiser(sess)
File "C:\Users\liu\Downloads\DnCNN-tensorflow-master\model.py", line 39, in init
self.dataset = dataset(sess)
File "C:\Users\liu\Downloads\DnCNN-tensorflow-master\model.py", line 202, in init
random.shuffle(ind)
File "C:\ProgramData\Anaconda3\lib\random.py", line 274, in shuffle
x[i], x[j] = x[j], x[i]
TypeError: 'range' object does not support item assignment
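
A likely fix, assuming model.py builds the index sequence with range(): under Python 3, range() returns an immutable object, so it must be converted to a list before shuffling in place.

```python
import random

n = 100               # illustrative: number of training samples
ind = list(range(n))  # list() makes the sequence mutable under Python 3
random.shuffle(ind)
```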

The PSNR result

Hi, Thanks for your tf code, it is amazing!
Now I am trying to reproduce DnCNN-S under TensorFlow, and your code helps me a lot. But I found that I can't reach your result of 29.17 even when using the same training code from your GitHub. Do you know whether the paper's 29.23 can be reached under TensorFlow at all?

can you share a trained model?

Hi, I'm trying to train your code (the version updated 2 weeks ago), but I'm not sure I'm doing it right...

The denoised sample image is not as good as I expected at epoch 25.

I'm going to run for 50 epochs, but I don't think I'll get a better result.

Can you share a trained weight model? It would be very helpful.

The training time

I use a GTX 1080 for training and didn't change your code, but it trains too slowly: almost three hours per epoch.

Bugs found in the source codes

  1. When I run main.py with the default parameters, the PSNR calculated in the evaluation stage is weird: it stays at about 3 dB after 5 epochs. I found a bug in model.py. In the definition of evaluate(self, epoch, counter, test_data), test_data[idx] is read with PIL.Image.open, so its value range is already [0, 255] and it should not be multiplied by 255. This line
     groundtruth = np.clip(255 * test_data[idx], 0, 255).astype('uint8')
     should be modified to
     groundtruth = np.clip(test_data[idx], 0, 255).astype('uint8')

  2. But the PSNR is still too low, about 4 dB, even though a noisy image with sigma=25 already has a PSNR of about 20 dB. I then found a bug in utils.py. In the definition of cal_psnr(im1, im2), the MSE calculation is wrong: the passed arguments may be 4-D tensors of shape (1, size1, size2, 1), so this line
     mse = (np.abs(im1 - im2) ** 2).sum() / (im1.shape[0] * im1.shape[1])
     should be modified to
     mse = (np.abs(im1 - im2) ** 2).mean()
     after which the evaluated PSNR becomes about 28 dB, which is normal.
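
Combining the two fixes above, a cal_psnr that is robust to 4-D inputs might look like this (a sketch, assuming images in the [0, 255] range):

```python
import numpy as np

def cal_psnr(im1, im2):
    # .mean() over all elements handles (1, H, W, 1) arrays and 2-D images alike
    mse = ((im1.astype(np.float64) - im2.astype(np.float64)) ** 2).mean()
    return 10 * np.log10(255.0 ** 2 / mse)
```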

Same picture for denoised and noisy

After testing, my image still had the same noise as the original file. Is there a problem with the code? And how do I get the clean picture?

Loss doesn't seem to go down at all...

With the code out of the box, the loss doesn't seem to go down at all. It fluctuates around 5.9 and stays there. Could you please tell me what this means? Thanks.

no image in log event file

Hi, I can see the scalar plots and the graph in the TensorBoard logs, but the images saved in the evaluate phase somehow never show up in TensorBoard. I used the exact code as downloaded.

Reproduce the model in checkpoint_demo

Hi Wenbo, thank you for sharing the code. Could you tell me how you trained the model in the checkpoint_demo folder, such as the training dataset and parameters? I tried but couldn't get a model as good as yours. Thanks in advance.

Why should I feed the clear image when running the TEST part?

output_clean_image = self.sess.run(
    [self.Y], feed_dict={self.Y_: clean_image, self.X: noisy,
                         self.is_training: False})

This is weird and I can't understand it: the fetched result is [self.Y], yet self.Y_ is fed the clean image at the same time? Could you please explain it to me? That would be helpful, thanks!
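
For what it's worth, TensorFlow only requires feeds for placeholders that the fetched tensors actually depend on, so the clean image matters only when loss/PSNR nodes are fetched too. A sketch of a pure-inference call, assuming self.Y depends only on self.X and self.is_training:

```python
# assuming self.Y does not depend on self.Y_, this is enough for inference
output_clean_image = self.sess.run(
    self.Y, feed_dict={self.X: noisy, self.is_training: False})
```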

Training became slower and slower

I noticed that training becomes slower and slower. The root cause is that in add_noise(), the following line keeps creating a new truncated_normal tensor every time add_noise() is called:
noise = sigma / 255.0 * sess.run(tf.truncated_normal(data.shape))

Since add_noise() is called many times in loops, it creates many truncated_normal graph nodes, which makes training slower and slower.

A better solution would be to create the truncated_normal tensor once, outside the training loops, and pass it to add_noise() as an argument. Something like:
# outside the training loops
norm_tensor = tf.truncated_normal(data.shape)
...
# inside the training loops
... = add_noise(image, sigma, sess, norm_tensor)

This problem also exists in evaluate() and test(), but their impact is small compared to train().

'float' object cannot be interpreted as an integer

Traceback (most recent call last):
File "main.py", line 83, in
tf.app.run()
File "/home/xiaolan2/folder/project/temp/hone/xiaolan2/folder/project/anaconda3/envs/yourenvname/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/home/xiaolan2/folder/project/temp/hone/xiaolan2/folder/project/anaconda3/envs/yourenvname/lib/python3.6/site-packages/absl/app.py", line 300, in run
_run_main(main, args)
File "/home/xiaolan2/folder/project/temp/hone/xiaolan2/folder/project/anaconda3/envs/yourenvname/lib/python3.6/site-packages/absl/app.py", line 251, in _run_main
sys.exit(main(argv))
File "main.py", line 57, in main
denoiser_train(model, lr=lr)
File "main.py", line 25, in denoiser_train
denoiser.train(eval_files, noisy_eval_files, batch_size=args.batch_size, ckpt_dir=args.ckpt_dir, epoch=args.epoch, lr=lr)
File "/mnt/project/grp_202/xiaolan2/DnCNN-tensorflow/model.py", line 114, in train
ind1 = range(res.shape[0]/2)
TypeError: 'float' object cannot be interpreted as an integer
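
A likely fix for line 114 of model.py: under Python 3, / always returns a float, so range() needs integer division.

```python
# Python 3: "/" returns a float even for ints; "//" keeps range() happy
ind1 = range(res.shape[0] // 2)
```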

ValueError: could not broadcast input array from shape (9,10,1) into shape (10,10,1)

Excuse me, I ran your generate_patches.py and I get an error like this:
Traceback (most recent call last):
File "/home/tian/tensorflow/example/Denoising/DnCNN-tensorflow-master/generate_patches.py", line 100, in
generate_patches()
File "/home/tian/tensorflow/example/Denoising/DnCNN-tensorflow-master/generate_patches.py", line 79, in generate_patches
random.randint(0, 7))
ValueError: could not broadcast input array from shape (9,10,1) into shape (10,10,1)

I don't understand your code, so I can't fix it. Can you tell me how to deal with it?
By the way, can you tell me the meaning or purpose of this code?
Thank you very much!

Different results with MatConvNet

Hi, I find that in the same environment, when I use the net for image inpainting, the results of this code differ from MatConvNet's. Is the net somewhat different from the paper's?

--use_gpu=False

Thank you for the project. I ran into some trouble when running your code with the command below:

python main.py --phase=test --use_gpu=False

I think it should print "CPU", but it prints "GPU" and "[!] Load failed...". Is there something wrong with the command above?

Abnormal artifacts in the results

[Result image]
The first row of the figure is fine; in the second row, the details show obvious smear-like block artifacts.

error xrange

I run main.py and get:
NameError: name 'xrange' is not defined
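
xrange is Python 2 only; it was removed in Python 3. Either replace every xrange with range in the repo's files, or add a small compatibility shim near the top of the affected module:

```python
# minimal Python 2/3 shim: on Python 3, alias the removed xrange to range
try:
    xrange
except NameError:
    xrange = range
```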

log files

First of all great job!

I have a question about the print outputs (e.g. Epoch: [10] [2327/5198] time: xxx, loss: xxx)...

Are they being saved somewhere (in the log files for example)? And if yes, how can I access them?

I want to use them for plotting the train vs dev PSNR for every epoch.

Thank you!
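
As far as I can tell, the repo only prints these lines to stdout, so one simple way to keep them is to redirect the output and parse the log afterwards (assuming a Unix shell; the file name is illustrative):

$ python main.py | tee train.log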

weight initialization

Hello :)
Could you please help me with something?...
It is not clear to me in which part of your code you initialize the weights of the CNN.

Issue about the Loss

Dear Mr/Mrs

I have a question about the loss value of the code: it only falls to between 3.4 and 3.6 and then stops decreasing, never getting close to 0. So my question is: am I running the code correctly?
I am sorry if my question disturbs you.
Thank you very much for your attention.

Doubt about Loss.png

What is the x-axis in the file loss.png?
The y-axis shows the loss values, but the x-axis isn't labeled.

What is the loss value plotted against?

Denoised Pictures

Hello, I want to know whether there are trained weights for a Gaussian distribution with std 0.1. I tried your weights and the result is blurry.

reuse = true error

I tried to run the program for testing (by setting phase = test in main.py) after training, and the following error occurred:

Variable conv1/weights already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:

File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1204, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2630, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
op_def=op_def)

This also happened when I ran main.py multiple times consecutively. It seems the weights are created and stored somewhere, and subsequent runs conflict with that, hence this error.
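
A common TF1 workaround when the graph is rebuilt in the same Python process (e.g. consecutive runs inside one interpreter or notebook) is to clear the default graph before building the model again:

```python
import tensorflow as tf

# discard variables created by the previous run before rebuilding the graph
tf.reset_default_graph()
```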

I get "MemoryError"

When I run generate_patches.py, I get this error:
Traceback (most recent call last):
File "/home/tian/tensorflow/example/Denoising/DnCNN-tensorflow-master/generate_patches.py", line 100, in
generate_patches()
File "/home/tian/tensorflow/example/Denoising/DnCNN-tensorflow-master/generate_patches.py", line 91, in generate_patches
inputs = inputs / 255.0 # normalize to [0, 1]
MemoryError

I have no GPU, and my computer has 10 GB of memory.
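
One low-effort mitigation, assuming inputs is the uint8 patch array in generate_patches.py: dividing a uint8 array by 255.0 produces float64, so casting to float32 first halves the memory needed for the normalized array.

```python
import numpy as np

inputs = np.zeros((1000, 40, 40, 1), dtype=np.uint8)  # stand-in patch array
inputs = inputs.astype(np.float32) / 255.0            # float32, not float64
```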

Memory Explode

This code is poorly written and could be tuned; memory keeps increasing and is never released until it explodes.

DnCNN last layer

The last layer's variable scope appears to be misnamed: with tf.variable_scope('block17'): should be ('block20').
