Comments (12)
A basic alignment is done to account for hand/camera motion during data capture. (This alignment is not pixel-accurate and doesn't account for perspective misalignment, which is why we use CoBi.)
The alignment is done between JPG pairs (e.g. if the input is X.ARW and the target is Y.JPG, the alignment is computed between X.JPG and Y.JPG and then applied to the output of the model).
from zoom-learn-zoom.
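The "compute between JPGs, then apply to the model output" step amounts to warping the output image by a fixed 2x3 affine transform. A minimal pure-NumPy sketch (a stand-in for something like `cv2.warpAffine` with inverse mapping; `warp_affine_nn` is a hypothetical helper, not a function from this repo):

```python
import numpy as np

def warp_affine_nn(img, tform, out_shape):
    """Apply a 2x3 affine transform (mapping output coords -> input coords)
    with nearest-neighbor sampling. Pure-NumPy stand-in for cv2.warpAffine
    with WARP_INVERSE_MAP."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    ones = np.ones_like(xs)
    # homogeneous output-pixel coordinates, shape (3, h*w)
    coords = np.stack([xs, ys, ones], axis=0).reshape(3, -1)
    src = tform @ coords                       # map into the input image
    sx = np.clip(np.rint(src[0]).astype(int), 0, img.shape[1] - 1)
    sy = np.clip(np.rint(src[1]).astype(int), 0, img.shape[0] - 1)
    return img[sy, sx].reshape(h, w, *img.shape[2:])

# identity transform leaves the image unchanged
img = np.arange(16.0).reshape(4, 4)
identity = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
assert np.array_equal(warp_affine_nn(img, identity, (4, 4)), img)
```

In practice one would use `cv2.warpAffine` with bilinear interpolation; the sketch above only illustrates the coordinate mapping.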
Thanks for your reply, I got it. So what you mean is that when you calculate the PSNR between the output RGB and the ground truth, some misalignment still remains and is unavoidable.
from zoom-learn-zoom.
Yes, but the misalignment affects all the baseline methods and ours in the same way, so the quantitative comparison is fair.
from zoom-learn-zoom.
Sorry to trouble you again. I use the compute_unalign_loss() function (tol=16, stride=2) in your loss.py to find the best match in the HR patch and calculate PSNR against the output RGB patch, but I can't get >= 20 dB on any image (due to misalignment, maybe plus some color mismatch). The output patch size is 512 in the test phase, using the 4X model you released.
Can you give me some tips on what I missed?
from zoom-learn-zoom.
Did you pre-align the images using the scripts in this repo, e.g. run_align.sh?
from zoom-learn-zoom.
Yes, I used run_align.sh to get tform.txt, which records the coordinate transform matrix between 00001.jpg and 0000X.jpg. When I test, I crop the field of view, calculate the corresponding tform matrix, and use it to bring the input image and the reference image into a common coordinate system. I also leave some row_tol to handle the boundary case and tol for the alignment in compute_unalign_loss(). But I really can't reach the average PSNR of 26.88 reported in your paper. The RGB patch size at test time was 512; the following are the results (bicubic on the left, your released model on the right):
/ZoomDataset/ZoomTestset/00007/00002.ARW: 12.785078, 18.303590
/ZoomDataset/ZoomTestset/00022/00002.ARW: 14.515143, 16.839341
/ZoomDataset/ZoomTestset/00024/00002.ARW: 12.913928, 21.346897
/ZoomDataset/ZoomTestset/00038/00002.ARW: 9.775525, 14.093602
/ZoomDataset/ZoomTestset/00039/00002.ARW: 10.085894, 17.137758
/ZoomDataset/ZoomTestset/00041/00002.ARW: 15.232382, 18.384796
/ZoomDataset/ZoomTestset/00053/00002.ARW: 11.555653, 17.822079
/ZoomDataset/ZoomTestset/00062/00002.ARW: 7.930915, 13.851149
/ZoomDataset/ZoomTestset/00070/00002.ARW: 9.524115, 15.228673
/ZoomDataset/ZoomTestset/00077/00002.ARW: 11.811259, 20.296578
/ZoomDataset/ZoomTestset/00085/00002.ARW: 10.942207, 14.176942
/ZoomDataset/ZoomTestset/00090/00002.ARW: 9.620262, 14.226431
/ZoomDataset/ZoomTestset/00091/00002.ARW: 11.455113, 15.849648
/ZoomDataset/ZoomTestset/00098/00002.ARW: 14.174250, 17.691472
/ZoomDataset/ZoomTestset/00101/00002.ARW: 8.668862, 14.501641
/ZoomDataset/ZoomTestset/00127/00002.ARW: 8.407202, 14.827260
/ZoomDataset/ZoomTestset/00131/00002.ARW: 10.309193, 18.304821
/ZoomDataset/ZoomTestset/00134/00002.ARW: 9.177653, 14.567726
/ZoomDataset/ZoomTestset/00135/00002.ARW: 11.624068, 16.483066
/ZoomDataset/ZoomTestset/00143/00002.ARW: 12.240002, 19.129595
/ZoomDataset/ZoomTestset/00153/00002.ARW: 13.110019, 21.511015
/ZoomDataset/ZoomTestset/00158/00002.ARW: 11.415764, 22.137255
/ZoomDataset/ZoomTestset/00163/00002.ARW: 14.180664, 19.609866
/ZoomDataset/ZoomTestset/00167/00002.ARW: 11.914188, 15.180804
/ZoomDataset/ZoomTestset/00169/00002.ARW: 12.302811, 15.980301
/ZoomDataset/ZoomTestset/00174/00002.ARW: 12.488629, 20.293079
/ZoomDataset/ZoomTestset/00179/00002.ARW: 13.240712, 17.063501
/ZoomDataset/ZoomTestset/00182/00002.ARW: 13.271000, 19.620075
/ZoomDataset/ZoomTestset/00186/00002.ARW: 10.279702, 17.315246
/ZoomDataset/ZoomTestset/00189/00002.ARW: 10.745554, 19.650331
/ZoomDataset/ZoomTestset/00192/00002.ARW: 10.516796, 19.469927
/ZoomDataset/ZoomTestset/00226/00002.ARW: 10.208805, 14.128721
/ZoomDataset/ZoomTestset/00232/00002.ARW: 10.010175, 14.531946
/ZoomDataset/ZoomTestset/00245/00002.ARW: 14.007477, 23.403203
/ZoomDataset/ZoomTestset/00247/00002.ARW: 10.576971, 17.488710
/ZoomDataset/ZoomTestset/00253/00002.ARW: 13.141019, 15.659888
/ZoomDataset/ZoomTestset/00261/00002.ARW: 15.938207, 20.207385
/ZoomDataset/ZoomTestset/00286/00002.ARW: 13.259017, 21.091941
/ZoomDataset/ZoomTestset/00288/00002.ARW: 16.798386, 22.747412
/ZoomDataset/ZoomTestset/00293/00002.ARW: 14.841238, 22.192923
/ZoomDataset/ZoomTestset/00323/00002.ARW: 10.406019, 11.464909
/ZoomDataset/ZoomTestset/00332/00002.ARW: 11.871990, 16.562447
/ZoomDataset/ZoomTestset/00341/00002.ARW: 19.892195, 21.141656
/ZoomDataset/ZoomTestset/00345/00002.ARW: 10.640870, 26.939078
/ZoomDataset/ZoomTestset/00349/00002.ARW: 12.010275, 19.480980
/ZoomDataset/ZoomTestset/00352/00002.ARW: 13.632774, 21.194607
/ZoomDataset/ZoomTestset/00360/00002.ARW: 11.487250, 22.951196
/ZoomDataset/ZoomTestset/00366/00002.ARW: 11.752497, 25.129250
/ZoomDataset/ZoomTestset/00370/00002.ARW: 13.336823, 17.839531
/ZoomDataset/ZoomTestset/00434/00002.ARW: 12.413606, 19.689393
Mean: 12.048803, 18.294793
I have tried many times, and I guess I must be doing something wrong in one of the steps.
from zoom-learn-zoom.
Are you using 00002.ARW as the reference? It's shot at 35mm, but the input 00007.ARW is shot at 240mm. 4X should be compared against 00003.ARW.
from zoom-learn-zoom.
Hmm... I also tested with input 00007.ARW and reference 00003.jpg, and the results made no difference. I also scrutinized the output and the matched RGB patch (the translated output in compute_unalign_loss()); some images aligned well, but most did not. No matter what tol was set to, it didn't work. So I wonder, is there any trick? The following are the results with input 00007.ARW and reference 00003.jpg:
/ZoomDataset/ZoomTestset/00007/00003.ARW: 12.032349, 18.491868
/ZoomDataset/ZoomTestset/00038/00003.ARW: 8.432056, 13.745029
/ZoomDataset/ZoomTestset/00039/00003.ARW: 12.175751, 16.047565
/ZoomDataset/ZoomTestset/00041/00003.ARW: 11.978979, 19.723395
/ZoomDataset/ZoomTestset/00053/00003.ARW: 9.277274, 13.917745
/ZoomDataset/ZoomTestset/00062/00003.ARW: 10.398028, 17.688646
/ZoomDataset/ZoomTestset/00070/00003.ARW: 9.822171, 15.094861
/ZoomDataset/ZoomTestset/00077/00003.ARW: 9.281354, 15.155921
/ZoomDataset/ZoomTestset/00085/00003.ARW: 12.899477, 16.153470
/ZoomDataset/ZoomTestset/00090/00003.ARW: 11.259477, 17.373516
/ZoomDataset/ZoomTestset/00091/00003.ARW: 11.822886, 16.180207
/ZoomDataset/ZoomTestset/00098/00003.ARW: 11.638027, 16.534200
/ZoomDataset/ZoomTestset/00101/00003.ARW: 8.549746, 14.172190
/ZoomDataset/ZoomTestset/00127/00003.ARW: 10.239830, 15.621939
/ZoomDataset/ZoomTestset/00131/00003.ARW: 12.698572, 16.968826
/ZoomDataset/ZoomTestset/00134/00003.ARW: 8.092343, 14.191878
/ZoomDataset/ZoomTestset/00135/00003.ARW: 7.430301, 14.234161
/ZoomDataset/ZoomTestset/00143/00003.ARW: 9.433518, 16.732820
/ZoomDataset/ZoomTestset/00153/00003.ARW: 8.799270, 17.369007
/ZoomDataset/ZoomTestset/00158/00003.ARW: 10.376176, 15.042353
/ZoomDataset/ZoomTestset/00163/00003.ARW: 13.897687, 18.627659
/ZoomDataset/ZoomTestset/00167/00003.ARW: 11.949355, 17.238464
/ZoomDataset/ZoomTestset/00169/00003.ARW: 11.802769, 17.259167
/ZoomDataset/ZoomTestset/00174/00003.ARW: 13.657629, 21.206627
/ZoomDataset/ZoomTestset/00179/00003.ARW: 13.737702, 16.044008
/ZoomDataset/ZoomTestset/00182/00003.ARW: 15.116275, 16.829180
/ZoomDataset/ZoomTestset/00186/00003.ARW: 8.625357, 13.881788
/ZoomDataset/ZoomTestset/00189/00003.ARW: 9.531015, 17.233171
/ZoomDataset/ZoomTestset/00192/00003.ARW: 10.920785, 18.816498
/ZoomDataset/ZoomTestset/00226/00003.ARW: 9.648585, 13.013804
/ZoomDataset/ZoomTestset/00232/00003.ARW: 11.316306, 15.007057
/ZoomDataset/ZoomTestset/00245/00003.ARW: 13.435781, 18.823359
/ZoomDataset/ZoomTestset/00247/00003.ARW: 10.682814, 17.235027
/ZoomDataset/ZoomTestset/00253/00003.ARW: 13.397809, 16.462197
/ZoomDataset/ZoomTestset/00261/00003.ARW: 14.297718, 17.127410
/ZoomDataset/ZoomTestset/00286/00003.ARW: 14.402463, 22.277693
/ZoomDataset/ZoomTestset/00288/00003.ARW: 17.126287, 22.824205
/ZoomDataset/ZoomTestset/00293/00003.ARW: 16.348606, 21.603735
/ZoomDataset/ZoomTestset/00323/00003.ARW: 10.908654, 13.373333
/ZoomDataset/ZoomTestset/00332/00003.ARW: 11.797874, 17.534843
/ZoomDataset/ZoomTestset/00341/00003.ARW: 20.092554, 19.145400
/ZoomDataset/ZoomTestset/00345/00003.ARW: 10.951042, 22.468463
/ZoomDataset/ZoomTestset/00349/00003.ARW: 12.426643, 18.370668
/ZoomDataset/ZoomTestset/00352/00003.ARW: 12.983976, 19.492224
/ZoomDataset/ZoomTestset/00360/00003.ARW: 10.334180, 15.857438
/ZoomDataset/ZoomTestset/00366/00003.ARW: 11.364361, 25.290092
/ZoomDataset/ZoomTestset/00370/00003.ARW: 12.571019, 14.020080
/ZoomDataset/ZoomTestset/00434/00003.ARW: 13.579210, 17.139367
Test mean: 11.740459, 17.180053
So what caused this problem? I found that some reference patches (even including the tol region) didn't contain the entire output region, and in that situation it's impossible to find the corresponding match region with compute_unalign_loss(). Maybe there is a problem in the tform matrix used to transform 00003.jpg to the desired 4X scale of 00007? But using the same algorithm to calculate the tform matrix, some images fit well and some don't. I'm really confused...
from zoom-learn-zoom.
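One pitfall with the scale change discussed above: assuming tform.txt stores a 2x3 affine estimated at the reference JPG's native resolution (an assumption; check the output of run_align.sh), re-expressing it in the 4X SR output's coordinate system means conjugating by the scale, T' = S T S^-1 with S = diag(r, r). For a uniform scale this leaves the linear part unchanged and multiplies only the translation by r; forgetting to rescale the translation produces exactly the kind of shift that pushes the match outside the tol window. A hypothetical sketch (`rescale_affine` is not a function from this repo):

```python
import numpy as np

def rescale_affine(tform, r):
    """Re-express a 2x3 affine transform, estimated at the reference JPG's
    resolution, in a coordinate system upscaled by factor r (e.g. r=4 for
    the 4X SR output): T' = S @ T @ S^-1 with S = diag(r, r).
    The linear block is unchanged; only the translation scales."""
    A = tform[:, :2]          # linear part (rotation/scale/shear)
    t = tform[:, 2]           # translation in original pixels
    return np.hstack([A, (r * t)[:, None]])

tform = np.array([[1.0, 0.0, 3.5],
                  [0.0, 1.0, -2.0]])
print(rescale_affine(tform, 4))   # translation becomes (14.0, -8.0)
```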
It doesn't really make sense to me that comparing with 00003 gives worse results than comparing against 00002, and your naive bicubic results don't match either.
I may also provide the eval code together with the training code, but that will happen later, once I'm back from my current internship and project.
from zoom-learn-zoom.
OK, thanks. And when the reference is 00002.jpg, the input is 00006.ARW; the ID interval I set is 4.
Let me review the data preprocessing again.
from zoom-learn-zoom.
Do you train the model?
I don't use compute_unalign_loss() when I am training; what is it used for?
I train the model with compute_patch_contextual_loss() and compute_contextual_loss().
from zoom-learn-zoom.
I think compute_unalign_loss() is used for finding the best match between the output and the label.
from zoom-learn-zoom.
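The search that compute_unalign_loss performs can be approximated as a brute-force sweep over integer offsets within the tolerance window, keeping the best score. The sketch below is a hypothetical stand-in, not the repo's implementation (which optimizes a pixel loss rather than PSNR directly); it also shows the failure mode discussed above: if the true offset exceeds tol, no candidate window contains the output and the best PSNR stays low.

```python
import numpy as np

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB between two equal-shaped arrays."""
    mse = np.mean((a - b) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def best_match_psnr(out, ref, tol=16, stride=2):
    """Slide `out` over the larger `ref` (which should extend `tol` pixels
    beyond `out` on each side) and return the best PSNR over integer
    offsets -- a rough stand-in for what compute_unalign_loss does."""
    h, w = out.shape[:2]
    best = -np.inf
    for dy in range(0, 2 * tol + 1, stride):
        for dx in range(0, 2 * tol + 1, stride):
            cand = ref[dy:dy + h, dx:dx + w]
            best = max(best, psnr(out, cand))
    return best
```

Note that the stride trades accuracy for speed: with stride=2, an odd true offset can never be matched exactly, which by itself costs some PSNR on top of any residual misalignment.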