This is the official repository for the paper "FPM-TRNet: Fused Photoacoustic and Operating Microscopic Imaging with Cross-modality Transform and Registration Network".
The proposed method takes paired PAM and RGB images as input and predicts the correspondence between them, which is then used to produce the final fused image. It comprises two subnetworks: MOTNet (Modality Transform Network) and HIRNet (Hierarchical Iterative Registration Network). MOTNet extracts modality maps from the input images, yielding a unified representation of vessels while suppressing background noise. HIRNet then estimates the correspondence from the modality maps in a coarse-to-fine manner.
To evaluate the performance of our proposed method, we construct two datasets for quantitative evaluation. The synthetic and in vivo datasets will be made available upon request.
The code will be released incrementally.
Supplementary.Material.mp4
Please cite our work if you find it useful for your research.
@article{Liufpmtrnet2024,
author = {Yuxuan Liu and Jiasheng Zhou and Yating Luo and Sung-Liang Chen and Yao Guo and Guang-Zhong Yang},
title = {FPM-TRNet: Fused Photoacoustic and Operating Microscopic Imaging with Cross-modality Transform and Registration Network},
year = {},
month = {},
url = {},
doi = {},
}