Comments (6)
Should I download the preprocessed data, such as 100307_3T_Structural_preproc.zip/T1w/T1w_acpc_dc_restore_brain.nii.gz? Thanks!
from arssr.
Is your problem solved? Can you tell me how to get the test set?
from arssr.
Any solutions for the above questions?
For the pre-processing, I can see that they have provided data_bulid.py.
from arssr.
Should I download the preprocessed data, such as 100307_3T_Structural_preproc.zip/T1w/T1w_acpc_dc_restore_brain.nii.gz? Thanks!
Yeah, the HCP-1200 dataset we download is already pre-processed. We use a Python script (https://github.com/huawei-lin/HCP_Dataset_Download_Automatically_Script) to perform the downloading. To download the image modality we want (e.g., T1w in our paper), we modify lines 39-51 of this script as below:
Original (https://github.com/huawei-lin/HCP_Dataset_Download_Automatically_Script/blob/master/script.py)
keyList = bucket.objects.filter(Prefix = prefix + '/{}/MNINonLinear/Results/tfMRI'.format(subject_number))
keyList = [key.key for key in keyList]
keyList = [x for x in keyList if '_LR.nii.gz' in x or '_RL.nii.gz' in x or 'EVs' in x]
if not os.path.exists(output_path):
    os.makedirs(output_path)
totalNumber = len(keyList)
trycnt = 0
tempKeys = [x for x in keyList if '_LR.nii.gz' in x or '_RL.nii.gz' in x]
Modified
keyList = bucket.objects.filter(Prefix = prefix + '/{}/MNINonLinear'.format(subject_number))
keyList = [key.key for key in keyList]
keyList = [x for x in keyList if 'T1w_restore_brain.nii.gz' in x]
if not os.path.exists(output_path):
    os.makedirs(output_path)
totalNumber = len(keyList)
trycnt = 0
tempKeys = [x for x in keyList if 'T1w_restore_brain.nii.gz' in x]
After downloading, we only apply intensity normalization to [0, 1]. Thanks for your attention!
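The comment only says that intensities are normalized to [0, 1]; a minimal sketch of such a min-max normalization (the helper name and the exact formula are assumptions, not taken from the ArSSR repository):

```python
import numpy as np

def normalize_intensity(volume):
    """Min-max normalize a volume's intensities to the range [0, 1]."""
    v_min, v_max = volume.min(), volume.max()
    return (volume - v_min) / (v_max - v_min)

# Toy 2x2 example: 0 maps to 0.0, 200 maps to 1.0, 50 maps to 0.25.
vol = np.array([[0.0, 50.0], [100.0, 200.0]])
norm = normalize_intensity(vol)
print(norm.min(), norm.max())  # 0.0 1.0
```

In practice the array would come from `sitk.GetArrayFromImage` on the downloaded NIfTI volume, as in the LPIPS example further below.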
from arssr.
Second, when I try to reproduce this experiment, I have some questions about the quality metrics LPIPS, PSI, and LPC-SI. How do you use the slice-by-slice strategy to compute them? Would you mind sharing the code for this part?
The three metrics are computed with their official implementations; the code links are:
- LPIPS: https://github.com/richzhang/PerceptualSimilarity
- PSI: https://github.com/feichtenhofer/PSI
- LPC-SI: https://ece.uwaterloo.ca/~z70wang/research/lpcsi/
Moreover, the slice-by-slice strategy includes three steps:
- Given a 3D volume, we extract its 2D MR slices from three orthogonal directions (axial, sagittal, and coronal directions).
- We compute the scores for each 2D MR slice.
- We average the scores of all the 2D MR slices to calculate the final scores.
The following code computes LPIPS using the slice-by-slice strategy:
import SimpleITK as sitk
import numpy as np
import torch
import lpips

# Build the LPIPS (AlexNet) model once, rather than once per slice.
lpips_alex = lpips.LPIPS(net='alex')

def compute_lpips(gt_slice, recon_slice):
    h, w = gt_slice.shape
    # Replicate the single-channel MR slice into 3 channels,
    # since LPIPS expects RGB-like input of shape (N, 3, H, W).
    gt_image = np.zeros((1, 3, h, w))
    recon_image = np.zeros((1, 3, h, w))
    for c in range(3):
        gt_image[:, c, :, :] = gt_slice
        recon_image[:, c, :, :] = recon_slice
    lpips_value = lpips_alex(torch.tensor(gt_image).to(torch.float32),
                             torch.tensor(recon_image).to(torch.float32))
    return lpips_value.item()

gt_volume_path = ''
recon_volume_path = ''
gt_volume = sitk.GetArrayFromImage(sitk.ReadImage(gt_volume_path))
recon_volume = sitk.GetArrayFromImage(sitk.ReadImage(recon_volume_path))

# In any volume, the center region often includes more image information than the margin region.
# Therefore, we extract 10 central 2D slices from each of the three directions.
# Here the volumes are of size 264*264*264.
i_start, i_end = 127, 137
lpips_value = 0.
for i in range(i_start, i_end):
    # direction 1
    gt_slice = gt_volume[i, :, :]
    recon_slice = recon_volume[i, :, :]
    lpips_value += compute_lpips(gt_slice, recon_slice)
    # direction 2
    gt_slice = gt_volume[:, i, :]
    recon_slice = recon_volume[:, i, :]
    lpips_value += compute_lpips(gt_slice, recon_slice)
    # direction 3
    gt_slice = gt_volume[:, :, i]
    recon_slice = recon_volume[:, :, i]
    lpips_value += compute_lpips(gt_slice, recon_slice)
lpips_value = lpips_value / 30  # 10 slices * 3 directions
print('LPIPS:', lpips_value)
The other two metrics (PSI and LPC-SI) are computed with the same slice-by-slice strategy, using similar code in MATLAB.
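The slice-by-slice strategy can also be written as a generic helper that accepts any 2D metric function, which is how the same loop would be reused for PSI or LPC-SI; this is an illustrative sketch (the function names are assumptions, not code from the repository):

```python
import numpy as np

def slice_by_slice_metric(gt_volume, recon_volume, metric_fn, i_start, i_end):
    """Average a 2D metric over central slices taken along all three axes."""
    total, count = 0.0, 0
    for i in range(i_start, i_end):
        for axis in range(3):  # the three orthogonal directions
            gt_slice = np.take(gt_volume, i, axis=axis)
            recon_slice = np.take(recon_volume, i, axis=axis)
            total += metric_fn(gt_slice, recon_slice)
            count += 1
    return total / count

# Toy example with mean absolute error as a stand-in 2D metric:
gt = np.zeros((4, 4, 4))
recon = np.ones((4, 4, 4))
result = slice_by_slice_metric(gt, recon, lambda a, b: float(np.abs(a - b).mean()), 1, 3)
print(result)  # 1.0
```

For the paper's setting, `metric_fn` would wrap the official LPIPS, PSI, or LPC-SI implementation and the slice range would be the central 127-137 used above.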
from arssr.
Thanks, I will give it a try!
from arssr.
Related Issues (15)
- Could you provide the test dataset? HOT 25
- Code for the evaluation metrics HOT 6
- Question about downsampling the test set HOT 2
- some questions
- Hello, how can I use ANTs to obtain segmentation maps like the ones below? HOT 4
- Question about automatic segmentation HOT 16
- Wrong command in README HOT 1
- training loss HOT 11
- HCP_1200 T1w raw data HOT 1
- HCP_1200 T1w raw data HOT 1
- Question about evaluation metric calculation HOT 2
- Which skull-stripped HCP-1200 dataset was used in the paper? HOT 1
- About inference time
- Unexpected key(s) in state_dict