
MSPFN's Introduction

Multi-Scale Progressive Fusion Network for Single Image Deraining (MSPFN)

This is a TensorFlow implementation of the MSPFN model proposed in the paper "Multi-Scale Progressive Fusion Network for Single Image Deraining" (CVPR 2020).

Requirements

  • Python 3
  • TensorFlow 1.12.0
  • OpenCV
  • tqdm
  • glob
  • sys

Motivation

Rain streaks in a rainy image, and in its multi-scale versions (multi-scale pyramid images), exhibit repetitive patterns that may carry complementary information (e.g., similar appearance) for characterizing the target rain streaks. We explore multi-scale representations from both the input image scales and the deep network features in a unified framework, and propose a Multi-Scale Progressive Fusion Network (MSPFN) that exploits this correlated information across scales for single image deraining.

Usage

I. Train the MSPFN model

Dataset Organization Form

If you prepare your own dataset, please organize it as follows:

|--train_data
   |--rainysamples
      |--file1
      |--file2
       :
      |--filen
   |--clean samples
      |--file1
      |--file2
       :
      |--filen

Then produce the corresponding '.npy' files in the '/train_data/npy' directory:

$ python preprocessing.py
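For reference, below is a minimal sketch of what this preprocessing step roughly amounts to. The crop size (96x96), the folder names, and the output file names train_rain.npy / train_label.npy are assumptions based on paths that appear elsewhere on this page, not the exact behaviour of preprocessing.py.

# Hypothetical sketch of preprocessing: crop aligned patches from paired
# rainy/clean images and stack them into .npy arrays. Patch size, folder
# names and output file names are assumptions, not the exact preprocessing.py.
import glob
import os

import cv2
import numpy as np

def build_npy(rainy_dir='./train_data/rainysamples',
              clean_dir='./train_data/clean samples',
              out_dir='./train_data/npy',
              patch=96):
    os.makedirs(out_dir, exist_ok=True)
    rainy_patches, clean_patches = [], []
    for rainy_path in sorted(glob.glob(os.path.join(rainy_dir, '*'))):
        clean_path = os.path.join(clean_dir, os.path.basename(rainy_path))
        rainy = cv2.imread(rainy_path)
        clean = cv2.imread(clean_path)
        if rainy is None or clean is None:
            continue
        h, w = rainy.shape[:2]
        # Take aligned, non-overlapping patches from the rainy image and its clean label.
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                rainy_patches.append(rainy[y:y + patch, x:x + patch])
                clean_patches.append(clean[y:y + patch, x:x + patch])
    np.save(os.path.join(out_dir, 'train_rain.npy'), np.asarray(rainy_patches, dtype=np.uint8))
    np.save(os.path.join(out_dir, 'train_label.npy'), np.asarray(clean_patches, dtype=np.uint8))

if __name__ == '__main__':
    build_npy()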

Training

Download the training dataset (raw images: Baidu Cloud, password: 4qnh; .npy files: Baidu Cloud, password: gd2s), or prepare your own dataset in the form described above.

Run the following commands:

cd ./model
python train_MSPFN.py 
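The training script reads the .npy arrays produced in the previous step. As a rough sketch of that loading step (the train_rain.npy path follows a path mentioned in the issues further down; the clean-label file name is an assumption, not necessarily what load_rain.py uses):

# Hypothetical loading sketch, mirroring what a load_rain.load() style helper might do.
import numpy as np

x_train_rain = np.load('../model/train_data/npy/train_rain.npy')   # rainy patches
x_train = np.load('../model/train_data/npy/train_label.npy')       # assumed clean-label file name
print(x_train_rain.shape, x_train.shape)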

II. Test the MSPFN model

Quick Test With the Raw Model (test_MSPFN_M17N1.py)

Download the pretrained models (Baidu Cloud, password: u5v6; or Google Drive).

Download the commonly used rain test datasets (R100H, R100L, TEST100, TEST1200, TEST2800) (Google Drive), as well as the test samples and labels for the joint tasks (BDD350, COCO350, BDD150) (Baidu Cloud, password: 0e7o). In addition, the test results of other competing models can be downloaded from here (TEST1200, TEST100, R100H, R100L).

Run the following commands:

cd ./model/test
python test_MSPFN_M17N1.py

The deraining results will be saved in './test/test_data/MSPFN'. We only provide this baseline model for comparison. There is a gap (0.1-0.2 dB) between the provided model and the values reported in the paper, which comes from subsequent fine-tuning of hyperparameters, training procedures, and constraints.
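If you want to compare the derained outputs against ground-truth clean images yourself, one option is a scikit-image based script like the sketch below. This is an illustration only, not the evaluation code used for the paper; the ground-truth directory is an assumption, and note that many deraining papers compute PSNR/SSIM on the luminance (Y) channel, which changes the numbers noticeably.

# Hypothetical evaluation sketch (requires scikit-image >= 0.19); not the official metric code.
import glob
import os

import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

derained_dir = './test/test_data/MSPFN'    # outputs written by the test script
gt_dir = './test/test_data/TEST100/gt'     # assumed location of the clean ground truth

psnrs, ssims = [], []
for out_path in sorted(glob.glob(os.path.join(derained_dir, '*'))):
    gt_path = os.path.join(gt_dir, os.path.basename(out_path))
    out = cv2.imread(out_path)
    gt = cv2.imread(gt_path)
    if out is None or gt is None or out.shape != gt.shape:
        continue
    psnrs.append(peak_signal_noise_ratio(gt, out, data_range=255))
    ssims.append(structural_similarity(gt, out, channel_axis=2, data_range=255))

print('PSNR: %.4f  SSIM: %.4f' % (sum(psnrs) / len(psnrs), sum(ssims) / len(ssims)))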

Test the Retrained Model With Your Own Dataset (test_MSPFN.py)

Download the pre-trained models.

Put your dataset in './test/test_data/'.

Run the following commands:

cd ./model/test
python test_MSPFN.py

The deraining results will be in './test/test_data/MSPFN'.

Citation

@InProceedings{Kui_2020_CVPR,
	author = {Jiang, Kui and Wang, Zhongyuan and Yi, Peng and Chen, Chen and Huang, Baojin and Luo, Yimin and Ma, Jiayi and Jiang, Junjun},
	title = {Multi-Scale Progressive Fusion Network for Single Image Deraining},
	booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
	month = {June},
	year = {2020}
}
@ARTICLE{9294056,
  author={K. {Jiang} and Z. {Wang} and P. {Yi} and C. {Chen} and Z. {Han} and T. {Lu} and B. {Huang} and J. {Jiang}},
  journal={IEEE Transactions on Circuits and Systems for Video Technology}, 
  title={Decomposition Makes Better Rain Removal: An Improved Attention-guided Deraining Network}, 
  year={2020},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/TCSVT.2020.3044887}}

MSPFN's People

Contributors

kuijiang94

MSPFN's Issues

About the labels in detection and segmentation

Thanks for your great work. After downloading BDD350, COCO350, and BDD150, I cannot find any information about bounding boxes or segmentation. Could you provide the corresponding .json files or other annotation files?

Pretrained model

Hi! I would like to know where I can download the pre-trained models.
I'm looking forward to your reply as soon as possible.

Rain200H

Hello!
Would it be possible to provide the training and testing results on Rain200H? When I train on Rain200H with this code, the test results are very poor.
The only difference is the .npy data: I cropped the patches to 128*128.
Thanks!

Is model.variables supposed to be model.g_variables in train_MSPFN.py?

Hello,
in the train_MSPFN.py file there is the line:
train_op = opt.minimize(model.train_loss, global_step=global_step, var_list=model.variables)
However, there is no model.variables attribute in the MODEL class.
Should it be model.g_variables instead of model.variables?
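For context, the usual TF 1.x pattern behind an attribute like g_variables is to collect the trainable variables of the generator scope and pass only those to the optimizer. The sketch below is an illustration of that pattern, not the actual code in MSPFN.py:

# Illustrative TF 1.x pattern (not the exact MSPFN code): variables created under
# a 'generator' scope are collected by scope name and passed via var_list so that
# only those weights are updated.
import tensorflow as tf

with tf.variable_scope('generator'):
    x = tf.placeholder(tf.float32, [None, 4])
    w = tf.get_variable('w', [4, 1])
    loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))

g_variables = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='generator')
opt = tf.train.AdamOptimizer(1e-4)
train_op = opt.minimize(loss, var_list=g_variables)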

Several protobuf and TensorFlow incompatibility issues

Traceback (most recent call last):
  File "C:\Users\batci\Downloads\2024\MSPFN-master\MSPFN-master\model\test\test_MSPFN.py", line 7, in <module>
    import tensorflow as tf
  File "C:\Users\batci\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\__init__.py", line 37, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "C:\Users\batci\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\python\__init__.py", line 42, in <module>
    from tensorflow.python import data
  File "C:\Users\batci\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\python\data\__init__.py", line 21, in <module>
    from tensorflow.python.data import experimental
  File "C:\Users\batci\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\python\data\experimental\__init__.py", line 96, in <module>
    from tensorflow.python.data.experimental import service
  File "C:\Users\batci\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\python\data\experimental\service\__init__.py", line 419, in <module>
    from tensorflow.python.data.experimental.ops.data_service_ops import distribute
  File "C:\Users\batci\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\python\data\experimental\ops\data_service_ops.py", line 24, in <module>
    from tensorflow.python.data.experimental.ops import compression_ops
  File "C:\Users\batci\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\python\data\experimental\ops\compression_ops.py", line 16, in <module>
    from tensorflow.python.data.util import structure
  File "C:\Users\batci\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\python\data\util\structure.py", line 23, in <module>
    from tensorflow.python.data.util import nest
  File "C:\Users\batci\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\python\data\util\nest.py", line 36, in <module>
    from tensorflow.python.framework import sparse_tensor as _sparse_tensor
  File "C:\Users\batci\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\python\framework\sparse_tensor.py", line 24, in <module>
    from tensorflow.python.framework import constant_op
  File "C:\Users\batci\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\python\framework\constant_op.py", line 25, in <module>
    from tensorflow.python.eager import execute
  File "C:\Users\batci\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\python\eager\execute.py", line 23, in <module>
    from tensorflow.python.framework import dtypes
  File "C:\Users\batci\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\python\framework\dtypes.py", line 42, in <module>
    class DType(
  File "C:\Users\batci\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\python\framework\dtypes.py", line 202, in DType
    def experimental_type_proto(cls) -> Type[types_pb2.SerializedDType]:
AttributeError: module 'tensorflow.core.framework.types_pb2' has no attribute 'SerializedDType'
!pip install --upgrade protobuf
Requirement already satisfied: protobuf in c:\users\batci\anaconda3\envs\tfgpu\lib\site-packages (3.18.0)
Collecting protobuf
Using cached protobuf-4.25.2-cp39-cp39-win_amd64.whl.metadata (541 bytes)
Using cached protobuf-4.25.2-cp39-cp39-win_amd64.whl (413 kB)
Installing collected packages: protobuf
Attempting uninstall: protobuf
Found existing installation: protobuf 3.18.0
Uninstalling protobuf-3.18.0:
Successfully uninstalled protobuf-3.18.0
Successfully installed protobuf-4.25.2
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow 2.10.1 requires keras<2.11,>=2.10.0, but you have keras 2.8.0 which is incompatible.
tensorflow 2.10.1 requires protobuf<3.20,>=3.9.2, but you have protobuf 4.25.2 which is incompatible.
tensorflow 2.10.1 requires tensorboard<2.11,>=2.10, but you have tensorboard 2.8.0 which is incompatible.
tensorflow 2.10.1 requires tensorflow-estimator<2.11,>=2.10.0, but you have tensorflow-estimator 2.8.0 which is incompatible.
!pip install protobuf==3.19.0
Collecting protobuf==3.19.0
Downloading protobuf-3.19.0-cp39-cp39-win_amd64.whl (895 kB)
Installing collected packages: protobuf
Attempting uninstall: protobuf
Found existing installation: protobuf 4.25.2
Uninstalling protobuf-4.25.2:
Successfully uninstalled protobuf-4.25.2
Successfully installed protobuf-3.19.0
WARNING: Failed to remove contents in a temporary directory 'C:\Users\batci\anaconda3\envs\tfgpu\Lib\site-packages\google~upb'.
You can safely remove it manually.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorboardx 2.6.2.2 requires protobuf>=3.20, but you have protobuf 3.19.0 which is incompatible.
tensorflow 2.10.1 requires keras<2.11,>=2.10.0, but you have keras 2.8.0 which is incompatible.
tensorflow 2.10.1 requires tensorboard<2.11,>=2.10, but you have tensorboard 2.8.0 which is incompatible.
tensorflow 2.10.1 requires tensorflow-estimator<2.11,>=2.10.0, but you have tensorflow-estimator 2.8.0 which is incompatible.
!pip install protobuf
Requirement already satisfied: protobuf in c:\users\batci\anaconda3\envs\tfgpu\lib\site-packages (3.18.0)
import tensorflow as tf

tf.__version__


TypeError                                 Traceback (most recent call last)
Cell In[3], line 1
----> 1 import tensorflow as tf

File ~\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\__init__.py:37
---> 37 from tensorflow.python.tools import module_util as _module_util

File ~\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\python\__init__.py:37
---> 37 from tensorflow.python.eager import context

File ~\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\python\eager\context.py:29
---> 29 from tensorflow.core.framework import function_pb2

File ~\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\core\framework\function_pb2.py:16
---> 16 from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2

File ~\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\core\framework\attr_value_pb2.py:16
---> 16 from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2

File ~\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\core\framework\tensor_pb2.py:16
---> 16 from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2

File ~\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\core\framework\resource_handle_pb2.py:16
---> 16 from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2

File ~\anaconda3\envs\tfgpu\lib\site-packages\tensorflow\core\framework\tensor_shape_pb2.py:36
---> 36 _descriptor.FieldDescriptor(

File ~\anaconda3\envs\tfgpu\lib\site-packages\google\protobuf\descriptor.py:553, in FieldDescriptor.__new__(cls, name, full_name, index, number, type, cpp_type, label, default_value, message_type, enum_type, containing_type, is_extension, extension_scope, options, serialized_options, has_default_value, containing_oneof, json_name, file, create_key)
--> 553 _message.Message._CheckCalledFromGeneratedFile()

TypeError: Descriptors cannot be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:

  1. Downgrade the protobuf package to 3.20.x or lower.
  2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
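For what it is worth, workaround (2) from the message above can be applied without reinstalling anything by setting the environment variable before TensorFlow is imported. This is the generic protobuf workaround quoted in the error, not a fix specific to this repository:

# Force the pure-Python protobuf implementation before importing TensorFlow
# (slower, but avoids the "Descriptors cannot be created directly" error).
import os
os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'python'

import tensorflow as tf
print(tf.__version__)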

About the test results

Hello, I recently read your paper with great interest. I tested with the pretrained model mentioned in the README, using the epoch44 checkpoint. Using the metric code provided by PreNet, on Rain100H your results are only PSNR = 28.2350 and SSIM = 0.8506, whereas PreNet's PSNR can reach above 29. Could you tell me what causes this? Thanks!

Problem when training with the .npy data

File "train_MSPFN.py", line 40, in train
x_train, x_test, x_train_rain, x_test_rain = load_rain.load()
File "/home/ubuntu/桌面/MSPFN-master/model/load_rain.py", line 6, in load
x_train_rain = np.load('../model/train_data/npy/train_rain.npy')
File "/home/ubuntu/anaconda3/envs/WeY/lib/python3.6/site-packages/numpy/lib/npyio.py", line 440, in load
pickle_kwargs=pickle_kwargs)
File "/home/ubuntu/anaconda3/envs/WeY/lib/python3.6/site-packages/numpy/lib/format.py", line 771, in read_array
array.shape = shape
ValueError: cannot reshape array of size 496291712 into shape (137609,96,96,3)
How can this problem be solved, and what is causing it? Thanks, I look forward to your reply! I am using the .npy data provided by the download link.

Downloading pretrained models

Hi kuihua, firstly thanks for sharing your work!

I was trying to download the pretrained model and other files from Baidu's cloud, but I could not succeed on either Linux or Mac. Is there any chance you could upload the models and datasets to a different platform, such as Google Drive?

Thank you.

About paper reproduction problem

Hello, I want to ask some questions.

Loading my previously trained model in train_MSPFN.py does not cause any problems, but loading the same trained model in test_MSPFN.py reports an error:
During handling of the above exception, another exception occurred: a Variable name or other graph key is missing.
Details are as follows:
Traceback (most recent call last):
File "E:/bwl_python/MSPFN-me-7.3/model/test/test_MSPFN.py", line 48, in
saver.restore(sess, '../MSPFN/epoch6')#93
File "E:\anaconda\path\envs\bwltfgpu\lib\site-packages\tensorflow\python\training\saver.py", line 1302, in restore
err, "a Variable name or other graph key that is missing")
tensorflow.python.framework.errors_impl.NotFoundError: Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
2 root error(s) found.
(0) Not found: Key generator/BCM2_0/down2_1/alpha not found in checkpoint
[[node save/RestoreV2 (defined at /bwl_python/MSPFN-me-7.3/model/test/test_MSPFN.py:47) ]]
[[save/RestoreV2/_453]]
(1) Not found: Key generator/BCM2_0/down2_1/alpha not found in checkpoint
[[node save/RestoreV2 (defined at /bwl_python/MSPFN-me-7.3/model/test/test_MSPFN.py:47) ]]
0 successful operations.
0 derived errors ignored.

In addition, what is the specific version of TensorFlow you used? 1.1? 1.14? (P.S. I get some version errors when using 1.1, such as AttributeError: module 'tensorflow' has no attribute 'AUTO_REUSE'.)

When epoch=5 (batch size=12, input image 480*320), the train_loss and the edge_loss do not change:
train_loss=0.00105, edge_loss=0.0010004...

Looking forward to your reply, thank you!
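As a side note, a "Key ... not found in checkpoint" error like the one above usually means the test-time graph defines variables under scope names that the checkpoint does not contain (compare also the BCM_{} vs URAB_{} scope question further down). One way to diagnose it with standard TF 1.x APIs is sketched below; this is an illustration, not code from this repository, and the checkpoint prefix is an assumption taken from the report above:

# Diagnostic sketch: list variable names stored in a checkpoint and compare them
# with the variables of the currently built graph to spot scope-name mismatches.
import tensorflow as tf

ckpt_prefix = '../MSPFN/epoch6'  # assumed checkpoint prefix, as in the report above

ckpt_vars = {name for name, _ in tf.train.list_variables(ckpt_prefix)}
graph_vars = {v.name.split(':')[0] for v in tf.global_variables()}

print('In graph but missing from checkpoint:', sorted(graph_vars - ckpt_vars))
print('In checkpoint but unused by the graph:', sorted(ckpt_vars - graph_vars))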

I have a few questions!

Thank you for sharing your work.
In your paper,

  1. Why did you use ConvLSTM instead of a normal LSTM?
  2. At the 1/4 scale, the FFM module seems to just concatenate. What was the purpose of using it there? Is it just for consistency?

Thanks.

About Rain Synthesis

Hello, authors! I would like to know how you synthesized the rain for the task-driven datasets. In the paper, you mention that you "create three new synthesis rain datasets ... through Photoshop". Could you provide a link explaining how you made those rain streaks? Thanks in advance.

Hello! About Dataset...

The original Rain1400 dataset consists of 12,600 training samples and 1,400 test samples.
Referring to Table 1 in your paper, it appears that 1,400 images were taken out of the training samples and added to the test samples.
What were the criteria for choosing those 1,400 images?

Pre-trained models of competing methods

Hi @kuihua

Since you have re-trained the previous methods (RESCAN, DIDMDN, UMRL, SEMI, PreNet) for a fair comparison, could you please share the weights and model files of these methods?

Thank you.

inference time

I also tested on a Titan Xp GPU, but every image takes about 5 s. Is that expected?

About Table 4

PreNet reaches 37.48 and 29.46 on the Rain100L and Rain100H datasets respectively, which does not seem as low as described in your paper.

Replicating Paper results using pretrained weights

Hi,
For replicating the results of the paper, here are the steps I followed:

  1. Downloaded pretrained weights using this link. I downloaded epoch44 as that was the latest model available at the link.

  2. Placed TEST1200 image crops of size 512x512 in the test_data directory.

  3. Placed the files checkpoint, epoch44.data-00000-of-00001 and epoch44.index in MSPFN/model/MSPFN

  4. Replaced the lines in MSPFN/model/test/test_MSPFN.py
    img_path = '.\\test_data\\TEST100\\inputcrop\\' to img_path = './test_data/TEST1200/inputcrop/'
    save_path = '.\\test_data\\MSPFN\\' to save_path = './test_data/MSPFN/'
    saver.restore(sess, '../MSPFN/epoch50') to saver.restore(sess, '../MSPFN/epoch44')

  5. Calculate the PSNR.

However, I get a PSNR of 30.34 as compared to 32.39 reported in the paper.
Could you please tell me what other modifications are required to replicate the results of the paper?

Thanks

Test model and train model are different?

I carefully checked the generator function in the MSPFN.py file and the generator function in the TEST_MSPFN.py file, and found that they are different. Does this mean the model used for training is not the same as the model used for testing? For example,
in TEST_MSPFN.py, line 183 is: with tf.variable_scope('BCM_{}'.format(n)):
but there is no BCM_{} scope in MSPFN.py; the corresponding line there is: with tf.variable_scope('URAB_{}'.format(n)):

Other Applications In paper

Thank you for your work.

Could you please share the three new synthetic rain datasets and the code based on COCO350, BDD350, and BDD150 mentioned in your paper? This would be a great help in our efforts to follow up on this work. Thank you very much.
By the way, the Google Drive link below is disabled:
----Download the Testing dataset (R100H, R100L, TEST100, TEST1200, TEST2800) (Google Drive).----
How can I get the testing dataset?

Thanks a lot.

About evaluation metric code of COCO/BDD

Dear Author,
Thank you for sharing the code and the database.
I am not so familiar with the detection and segmentation areas, so I was wondering if you could share the link or the code for the evaluation metrics on the detection/segmentation results used in the paper. I would appreciate your help!
Best,

About SSIM and PSNR

Hello, thank you for sharing this wonderful work.
I have a question: how are the PSNR and SSIM metrics computed, and what are the values of the corresponding parameters?
Thanks.

test error

(screenshot of the error)

When I test with my own pictures, it does not seem to work well. Could you tell me how I can correct the error?
