
hinet's People

Contributors

tomtomtommi

hinet's Issues

Error when creating the environment from environment.yml

Hi, when I run conda env create -f environment.yml to create the environment, it always fails with ResolvePackageNotFound:. How can I solve this?

Confusion about low frequency loss

Hi, the low-frequency loss in your code seems to be different from what you describe in the paper. You only select the first image in the batch, each with 12 channels, to calculate the loss. (If so, it is no different from the guide loss.) And the lambda is also not 10. Could you please explain a little?

Hi, how to obtain your testing sets?

Thanks a lot for sharing this excellent work. However, according to the paper, the testing datasets include the DIV2K [1] testing dataset with 100 images at resolution 1024 × 1024, ImageNet [25] with 50,000 images at resolution 256 × 256, and the COCO [17] dataset with 5,000 images at resolution 256 × 256. It seems that the images in the testing datasets were randomly chosen from their original datasets, so could you please release the testing datasets used in your paper?

about the size of the validation set

I don't know if the author noticed that the DIV2K validation set contains a photo, "0830.png", that has black edges on the top and bottom when cropped to 1024x1024. In my testing, I found that this leads to higher values when computing PSNR and SSIM.

A circular import bug

Hi, thank you very much for sharing the code. But there seems to be a problem with a circular import.
In model.py you use from hinet import Hinet, but in hinet.py you use from model import *, which causes a circular import bug. Actually, there is no need to use from model import * in hinet.py; import torch.nn as nn is enough.
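A minimal sketch of the fix described above (the class name Hinet comes from the issue; the placeholder body is an assumption, since the real file contents are not shown here):

```python
# hinet.py -- import only what this module actually needs instead of
# "from model import *": model.py imports hinet.py, so importing model
# back from hinet.py is what creates the circular dependency.
import torch
import torch.nn as nn


class Hinet(nn.Module):
    def __init__(self):
        super().__init__()
        # the real Hinet builds its invertible blocks here; a single
        # placeholder layer keeps this sketch self-contained
        self.body = nn.Identity()

    def forward(self, x):
        return self.body(x)
```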

Solution to loss explosion

Thank you for sharing your code. I am trying your code and I do observe the loss explosion problem. Do you know the underlying reason for it? Is there any better solution than manually restarting training with a lower learning rate every time?
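One common mitigation for sudden loss explosions, sketched below, is gradient-norm clipping; this is a standard remedy, not necessarily the authors' fix, and the tiny linear model here only stands in for HiNet:

```python
import torch

# Clip the global gradient norm before each optimizer step so a single
# bad batch cannot blow up the weights (a common remedy, an assumption
# here rather than the authors' solution).
model = torch.nn.Linear(8, 8)  # stands in for the HiNet model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(4, 8)
loss = (model(x) ** 2).mean() * 1e6  # an artificially huge loss
optimizer.zero_grad()
loss.backward()
# rescale gradients so their total L2 norm is at most 1.0
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```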

PSNR metric computation

Hi! When computing the PSNR metric in the paper, do you use the Y channel of YCbCr or the RGB channels? I could not find this in the paper.
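For reference, a sketch of the Y-channel convention, using ITU-R BT.601 luma weights as is common in image-restoration papers; whether the HiNet paper used Y or RGB is exactly the open question here:

```python
import numpy as np

def rgb_to_y(img):
    """Luma channel via BT.601 weights; img is float RGB in [0, 1], shape (H, W, 3)."""
    return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

def psnr_y(ref, est, peak=1.0):
    """PSNR computed on the Y channel only."""
    mse = np.mean((rgb_to_y(ref) - rgb_to_y(est)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8, 3), 0.4)
est = ref + 0.1  # a uniform RGB offset of 0.1 gives a Y-MSE of 0.01
```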

Hard to reproduce the performance

Thanks for your code! But I cannot reproduce the performance with the same training settings. The model is prone to degradation during training. Looking forward to your reply!

loss

Can HiNet be proved to be invertible?

I learned that the determinant of the Jacobian matrix of an invertible neural network should be equal to 1, but HiNet's Jacobian determinant does not seem to be 1.
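One relevant observation: invertibility only requires a nonzero Jacobian determinant, not a unit one (det = 1 means volume preservation, as in additive coupling). A standard affine coupling block is invertible in closed form for any subnetworks s and t; whether it matches HiNet's blocks exactly is an assumption:

```latex
% Affine coupling: split x into (x_1, x_2)
y_1 = x_1, \qquad y_2 = x_2 \odot \exp\!\big(s(x_1)\big) + t(x_1)
% Closed-form inverse, valid for arbitrary s, t:
x_1 = y_1, \qquad x_2 = \big(y_2 - t(y_1)\big) \odot \exp\!\big(-s(y_1)\big)
% Jacobian determinant: always nonzero, but equal to 1 only if \textstyle\sum_i s_i(x_1) = 0
\det J = \exp\Big(\sum_i s_i(x_1)\Big)
```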

The loss value soars very high suddenly

Hello! Recently I made an attempt to reproduce your work using the default hyperparameters (lamda_reconstruction = 5, lamda_guide = 1, lamda_low_frequency = 1, etc.). But I observe that the loss value, especially the reconstruction loss, soars extremely high, for example from 0.0007 to 971213285 within 10 mini-batches. Also, the revealed images turn green and look terrible. Although the loss may gradually decrease again, this is very strange and I can't figure it out. Did you encounter similar problems during your experiments? Could you explain a little and offer a possible solution? I would appreciate a reply.

where is the visualization image generated by the training

Hello, may I ask where the visualization images generated during training are saved? After I modified the path in config.py, no images were saved during training. In addition, do you consider the robustness of the steganography, i.e., whether it can resist JPEG compression, quantization, and other operations?

quantization?

Do you consider a quantization step? Could you tell me where it is in the code? Thank you very much.
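For context, a hedged sketch of the quantization step being asked about (not taken from the HiNet code): a float stego image saved as an 8-bit PNG is clipped to [0, 1] and rounded to multiples of 1/255, and this lossy rounding is what the reveal network must survive.

```python
import numpy as np

def quantize(img):
    """Simulate saving a float image as 8-bit: clip to [0, 1], round to 1/255 steps."""
    return np.round(np.clip(img, 0.0, 1.0) * 255.0) / 255.0

x = np.array([0.5004, -0.1, 1.2])  # out-of-range values get clipped
```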

Something about loss function.

Thank you for sharing your code! But I found that the loss-function hyperparameters (lamda_reconstruction and lamda_low_frequency) in your code differ from the paper. Which should I use?

bug with batchsize_val in config.py

Hi, Tommy. First, congrats on your publication at ICCV 2021.
But I found a small bug in your implementation. batchsize_val in config.py should be at least 2 × the number of GPUs and must be divisible by the number of GPUs; otherwise, the following error occurs during the backward stage of validation:
"TypeError: forward() missing 1 required positional argument: 'x'"
Besides, I tried batch_size 16 and it takes around 22 GB of CUDA memory to run. I hope you can add this reminder to the README as a kind notification for those who seek to experiment and learn from your valuable code and great work. Thanks.
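The constraint described above can be sketched as a sanity check; the name batchsize_val comes from the issue, while the function itself is hypothetical:

```python
def check_val_batch_size(batchsize_val, n_gpus):
    """Raise early if batchsize_val violates the constraint reported in the issue."""
    if batchsize_val < 2 * n_gpus or batchsize_val % n_gpus != 0:
        raise ValueError(
            f"batchsize_val={batchsize_val} must be at least {2 * n_gpus} "
            f"and divisible by the number of GPUs ({n_gpus})"
        )
```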
