tomtomtommi / hinet
Official PyTorch implementation of "HiNet: Deep Image Hiding by Invertible Network" (ICCV 2021)
Hi, when I run `conda env create -f environment.yml` to create the environment, it always fails with `ResolvePackageNotFound`. How can I solve this?
Hi, the low-frequency loss in your code seems to be different from what you describe in the paper. You only select the first image in the batch, each with 12 channels, to calculate the loss. (If so, it would be no different from the guide loss.) And the lambda is also not 10. Could you please explain a little?
Thanks a lot for sharing this excellent work. However, according to the paper, the testing datasets include DIV2K [1] testing dataset with 100 images at resolution 1024 × 1024, ImageNet [25] with 50,000 images at resolution 256 × 256, and COCO [17] dataset with 5,000 images at resolution 256 × 256. It seems that images in the testing datasets are randomly chosen from their original datasets. Therefore, could you please release the testing datasets used in your paper?
I don't know if the author noticed that in the DIV2K validation set, the photo "0830.png" has black edges on the top and bottom when cropped to 1024x1024. In my testing, I found that this leads to higher values when calculating PSNR and SSIM.
Hi, thank you very much for sharing the code. But there seems to be a problem with a circular import. In model.py you use `from hinet import Hinet`, but in hinet.py you use `from model import *`, which causes a circular-import bug. Actually, there is no need to use `from model import *` in hinet.py; `import torch.nn as nn` is enough.
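A minimal, self-contained sketch of the reported cycle and the suggested fix (the file contents are stand-ins for the real modules, and `torch.nn` is stubbed with a stdlib module so the sketch runs without PyTorch installed):

```python
import os
import subprocess
import sys
import tempfile

# Reproduce the reported layout: hinet.py does `from model import *`
# while model.py does `from hinet import Hinet` -- a circular import.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "hinet.py"), "w") as f:
        f.write("from model import *\nclass Hinet:\n    pass\n")
    with open(os.path.join(d, "model.py"), "w") as f:
        f.write("from hinet import Hinet\nclass Model:\n    pass\n")

    # Importing hinet first triggers the cycle: model.py asks for Hinet
    # before hinet.py has finished defining it, so the import fails.
    bad_rc = subprocess.run([sys.executable, "-c", "import hinet"],
                            cwd=d, capture_output=True).returncode

    # The fix: hinet.py drops `from model import *` and keeps only the
    # import it needs (`import types` stands in for `import torch.nn as nn`).
    with open(os.path.join(d, "hinet.py"), "w") as f:
        f.write("import types as nn  # stand-in for torch.nn\n"
                "class Hinet:\n    pass\n")
    good_rc = subprocess.run([sys.executable, "-c", "import model"],
                             cwd=d, capture_output=True).returncode

print(bad_rc, good_rc)  # the cycle fails; the fixed layout imports cleanly
```

The cycle only blows up because `Hinet` is requested by name while hinet.py is still partially initialized; once hinet.py stops importing from model.py, the dependency is one-directional and both modules load.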
May I ask what the difference is between HiNet and IICNet? After reviewing the source code of IICNet, I found that its network structure is very similar to HiNet's, especially DenseBlock, InvArch, and InvBlock.
Thank you for sharing your code. I am trying it and I do see the loss explosion problem. Do you know the underlying reason for it? Is there a better solution than manually restarting training with a lower learning rate each time?
Hi! When computing the PSNR metric in the paper, did you use the Y channel of YCbCr or the RGB channels? I couldn't find this in the paper.
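For reference, the two conventions the question contrasts can be sketched as follows (a generic illustration on random data, not the paper's evaluation code; the BT.601 luma weights are the usual choice for the Y channel in image-restoration papers):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def rgb_to_y(img):
    """BT.601 luma -- the Y of YCbCr."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

# Synthetic "reference vs. distorted" pair with mild Gaussian noise.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64, 3)).astype(np.float64)
test = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)

psnr_rgb = psnr(ref, test)                      # averaged over all 3 channels
psnr_y = psnr(rgb_to_y(ref), rgb_to_y(test))    # Y channel only
```

The two numbers generally differ (Y-channel PSNR is typically a few dB higher, since the luma mix averages out per-channel noise), which is why papers should state which convention they use.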
The link to the paper is broken. Could you please share the paper?
I learned that the determinant of the Jacobian matrix of an invertible neural network should equal 1, but HiNet's Jacobian determinant does not seem to be 1.
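For context, standard coupling-layer algebra (not specific to HiNet) shows why the determinant need not be 1: invertibility only requires the determinant to be nonzero. For an affine coupling layer,

```latex
% Affine coupling: split x into (x_1, x_2), with learned maps s and t.
y_1 = x_1, \qquad
y_2 = x_2 \odot \exp\bigl(s(x_1)\bigr) + t(x_1)
% The Jacobian is block-triangular, so
\log\left|\det \frac{\partial y}{\partial x}\right| = \sum_i s(x_1)_i
% which is zero (i.e. det = 1) only in the additive case s \equiv 0.
```

So a unit determinant is a property of purely additive coupling (as in NICE), not of invertible networks in general.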
Hello! Recently I attempted to reproduce your work using the default hyperparameters (lamda_reconstruction = 5, lamda_guide = 1, lamda_low_frequency = 1, etc.). But I observe that the loss value, especially the reconstruction loss, can soar to extremely high values, for example from 0.0007 to 971213285 within just 10 mini-batches. Also, the revealed images turn green and look terrible. Although the loss may gradually decrease again, this is very strange and I can't figure it out. Did you encounter similar problems during your experiments? Could you explain a little and offer a possible solution? I would appreciate a reply.
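One generic mitigation for such loss spikes (a common technique, not from the HiNet paper, and no claim that it fixes this particular instability) is to clip the global gradient norm before each optimizer step. The idea, which `torch.nn.utils.clip_grad_norm_` implements in PyTorch, can be sketched in plain NumPy:

```python
import numpy as np

def clip_global_norm(grads, max_norm):
    """Scale a list of gradient arrays so their joint L2 norm is at most
    max_norm, mirroring the behavior of torch.nn.utils.clip_grad_norm_."""
    total = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    if total > max_norm:
        scale = max_norm / (total + 1e-12)
        grads = [g * scale for g in grads]
    return grads, total

# A gradient spike of the kind that accompanies a sudden loss explosion:
grads = [np.full((3, 3), 1e4), np.full((5,), -2e4)]
clipped, norm_before = clip_global_norm(grads, max_norm=1.0)
norm_after = np.sqrt(sum(float(np.sum(g ** 2)) for g in clipped))
```

Clipping bounds the size of any single update, so one pathological batch cannot throw the weights far from the current optimum the way an unclipped 1e8-scale loss can.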
Hello, may I ask where the visualization images generated during training are saved? After I modified the path in config.py, no images were saved during training. In addition, do you consider the robustness of the steganography, i.e., whether it can resist JPEG compression, quantization, and other operations?
Do you consider a quantization step? Could you tell me where it is? Thank you very much.
What are MAE and RMSE calculated on: the RGB image or the Y channel?
Thank you for sharing your code! But I found that the loss-function hyperparameters (lamda_reconstruction and lamda_low_frequency) in your code differ from those in the paper. Which should I use?
Hi, Tommy. First, congratulations on your publication at ICCV 2021.
But I found a small bug in your implementation. The batchsize_val in config.py must be at least 2 × the number of GPUs and divisible by the number of GPUs; otherwise, during the backward stage of validation, the following error occurs:
"TypeError: forward() missing 1 required positional argument: 'x'"
Besides, I tried batch_size 16, and it takes around 22 GB of CUDA memory to run. I hope you can add this reminder to the README as a kind notification for those who seek to experiment with and learn from your valuable code and great work. Thanks.
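A small guard reflecting the constraints observed in this report could look like the following (the constants come from the report above, not from official documentation; `batchsize_val` is the config.py field it names):

```python
def check_val_batch_size(batchsize_val, num_gpus):
    """Guard against the DataParallel crash described above: the reporter
    found that the validation batch must split evenly across GPUs, with at
    least 2 samples per GPU."""
    if batchsize_val % num_gpus != 0:
        raise ValueError(
            f"batchsize_val={batchsize_val} is not divisible by "
            f"num_gpus={num_gpus}")
    if batchsize_val < 2 * num_gpus:
        raise ValueError(
            f"batchsize_val={batchsize_val} is smaller than "
            f"2 * num_gpus = {2 * num_gpus}")
    return True
```

Failing fast with a clear message at config-load time is friendlier than the `TypeError` surfacing mid-validation deep inside `forward()`.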