
Comments (33)

jamesfcavanagh commented on September 28, 2024

Want to know the solution? It's a memory issue in Docker defaults. Set it higher than the default 2GB.
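
For reference, a sketch of what that looks like in practice (my example values, not from the original comment): on Docker Desktop for macOS/Windows the 2 GB default is the VM's memory, raised under Preferences -> Resources; the `--memory` flag below only caps a single container and cannot exceed that VM allotment.

```shell
# Example values only: allow the heudiconv container up to 8 GB.
# On Docker Desktop this must not exceed the VM memory configured
# in Preferences -> Resources.
docker run --rm -it --memory=8g -v $PWD:/data nipy/heudiconv \
    -d /data/{subject}/raw/*.IMA -s SC02162 -f /data/heuristic.py \
    -c dcm2niix -b -o /data/output
```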

from heudiconv.

mgxd commented on September 28, 2024

@gmrosenb looks like the heuristic might be overwriting the first run - instead of hard coding each run individually, you can use the {item} field within your heuristic template to automatically generate a new file for each run. To see an example, you can take a look at some of the sample heuristics
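
To illustrate the idea (my sketch, with series IDs taken from the edit.txt posted later in this thread): `{item}` is documented as the "index within category", i.e. heudiconv fills it with the entry's 1-based position in the list kept for that key, so one template covers every run.

```python
# Illustration only: one template with {item} replaces three
# hard-coded run templates.
template = 'func/sub-{subject}_task-mbmf_run-{item}_bold'

series_ids = ['5-MBMF_tfMRI-640TR_Run1',
              '7-MBMF_tfMRI-640TR_Run2',
              '9-MBMF_tfMRI-640TR_Run3']

# item = 1-based position of each entry in the list for this key
names = [template.format(subject='SC02162', item=i)
         for i in range(1, len(series_ids) + 1)]
# names[0] == 'func/sub-SC02162_task-mbmf_run-1_bold', etc.
```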

gmrosenb commented on September 28, 2024

@mgxd thank you for your response.

I tried using the {item} field for each run. Now, when I run heudiconv with dcm2niix, some (but not all) of the DICOMs are converted. Interestingly, the auto.txt and edit.txt files in the hidden directory seem to identify all of the scans correctly. However, the scans.tsv file, which lists all of the processed files, shows that not all of the scans were converted.

I cannot find a pattern in which files are converted and which aren't.

The body of the heuristic file, command line input and output, and the text from edit.txt and scans.tsv are all below. Please let me know if you have any further insights.


Heuristic file:

#!/usr/bin/env python2
import os

def create_key(template, outtype=('nii.gz','dicom'), annotation_classes=None):
    if template is None or not template:
        raise ValueError('Template must be a valid format string')
    return (template, outtype, annotation_classes)


def infotodict(seqinfo):
    """Heuristic evaluator for determining which runs belong where
    allowed template fields - follow python string module:
    item: index within category
    subject: participant id
    seqitem: run number during scanning
    subindex: sub index within group
    """
    loc = create_key('anat/sub-{subject}_loc')
    fmap = create_key('fmap/sub-{subject}_dir-{dir}_epi')
    mbmf1_sbref = create_key('func/sub-{subject}_task-mbmf_run-1_sbref')
    mbmf1 = create_key('func/sub-{subject}_task-mbmf_run-1_bold')
    mbmf2_sbref = create_key('func/sub-{subject}_task-mbmf_run-2_sbref')
    mbmf2 = create_key('func/sub-{subject}_task-mbmf_run-2_bold')
    mbmf3_sbref = create_key('func/sub-{subject}_task-mbmf_run-3_sbref')
    mbmf3 = create_key('func/sub-{subject}_task-mbmf_run-3_bold')
    t1w = create_key('anat/sub-{subject}_T1w')
    t2w = create_key('anat/sub-{subject}_T2w')    
    rest_sbref = create_key('func/sub-{subject}_task-rest_sbref')
    rest = create_key('func/sub-{subject}_task-rest_bold')
    dwi = create_key('dwi/sub-{subject}_dir-{dir}_dwi')
    
    info = {loc: [], fmap: [], mbmf1_sbref: [], mbmf1: [], mbmf2_sbref: [], 
            mbmf2: [], mbmf3_sbref: [], mbmf3: [], t1w: [], t2w: [], 
            rest_sbref: [], rest: [], dwi: []}
    
   
    for idx, s in enumerate(seqinfo):
        if ('Localizer (32 Ch + Physio )' == s.series_description):
            info[loc].append({'item': s.series_id})
        elif ('SpinEchoFieldMap_AP' == s.series_description):
            info[fmap].append({'item': s.series_id, 'dir': 'AP'})
        elif ('SpinEchoFieldMap_PA' == s.series_description):
            info[fmap].append({'item': s.series_id, 'dir': 'PA'})
        elif ('MBMF_tfMRI-640TR_Run1_SBRef' == s.series_description):
            info[mbmf1_sbref].append({'item': s.series_id})
        elif ('MBMF_tfMRI-640TR_Run2_SBRef' == s.series_description):
            info[mbmf2_sbref].append({'item': s.series_id})    
        elif ('MBMF_tfMRI-640TR_Run3_SBRef' == s.series_description):
            info[mbmf3_sbref].append({'item': s.series_id})    
        elif ('T1w_MPR1_AP' == s.series_description):
            info[t1w].append({'item': s.series_id})    
        elif ('T2w_SPC1_AP' == s.series_description):
            info[t2w].append({'item': s.series_id})    
        elif ('DTI-2mm_MB3_LR' == s.series_description): 
            info[dwi].append({'item': s.series_id,'dir': 'LR'}) 
        elif ('DTI-2mm_MB3_RL' == s.series_description): 
            info[dwi].append({'item': s.series_id,'dir': 'RL'}) 
        elif ('rfMRI_REST_AP' == s.series_description) and not ('NORM' in s.image_type):  # if 'NORM' is not in image_type, this is the correct version
            info[rest].append({'item': s.series_id})  
        elif ('rfMRI_REST_AP_SBRef' == s.series_description) and not ('NORM' in s.image_type):  # same 'NORM' check as above
            info[rest_sbref].append({'item': s.series_id})
        elif ('MBMF_tfMRI-640TR_Run1' == s.series_description):
            info[mbmf1].append({'item': s.series_id})        
        elif ('MBMF_tfMRI-640TR_Run2' == s.series_description):
            info[mbmf2].append({'item': s.series_id})    
        elif ('MBMF_tfMRI-640TR_Run3' == s.series_description):
            info[mbmf3].append({'item': s.series_id})    
        else:
            pass        
    return info

Command line input and output:

172-17-57-93:data2 gailrosenbaum$ docker run --rm -it -v $PWD:/data2 nipy/heudiconv -d /data2/{subject}/raw/*.IMA -s SC02162 -f /data2/MBMF_heuristic12.py -c dcm2niix -b -o /data2/SC02162/output7/
INFO: Need to process 1 study sessions
INFO: PROCESSING STARTS: {'session': None, 'outdir': '/data2/SC02162/output7/', 'subject': 'SC02162'}
INFO: Processing 5602 dicoms
INFO: Analyzing 5602 dicoms
INFO: Generated sequence info with 29 entries
INFO: Doing conversion using dcm2niix
INFO: Converting /data2/SC02162/output7/anat/sub-SC02162_T1w (208 DICOMs) -> /data2/SC02162/output7/anat . Converter: dcm2niix . Output types: ('nii.gz', 'dicom')
INFO: Executing node convert in dir: /tmp/heudiconvdcm8J1iMW/convert
INFO: Running: dcm2niix -b y -z i -x n -t n -m n -f anat -o /tmp/heudiconvdcm8J1iMW/convert -s n -v n /tmp/heudiconvdcm8J1iMW/convert/SC02162.MR.SACKLER_SI-CH.0010.0001.2015.10.07.16.20.19.671875.4457721.IMA
INFO: Executing node embedder in dir: /tmp/heudiconvdcm8J1iMW/embedder
INFO: Post-treating /data2/SC02162/output7/anat/sub-SC02162_T1w.json file
INFO: Converting /data2/SC02162/output7/func/sub-SC02162_task-mbmf_run-2_sbref (1 DICOMs) -> /data2/SC02162/output7/func . Converter: dcm2niix . Output types: ('nii.gz', 'dicom')
INFO: Executing node convert in dir: /tmp/heudiconvdcm8J1iMW/convert
INFO: Running: dcm2niix -b y -z i -x n -t n -m n -f func -o /tmp/heudiconvdcm8J1iMW/convert -s n -v n /tmp/heudiconvdcm8J1iMW/convert/SC02162.MR.SACKLER_SI-CH.0006.0001.2015.10.07.16.20.19.671875.4433141.IMA
INFO: Executing node embedder in dir: /tmp/heudiconvdcm8J1iMW/embedder
INFO: Post-treating /data2/SC02162/output7/func/sub-SC02162_task-mbmf_run-2_sbref.json file
INFO: Converting /data2/SC02162/output7/anat/sub-SC02162_T2w (208 DICOMs) -> /data2/SC02162/output7/anat . Converter: dcm2niix . Output types: ('nii.gz', 'dicom')
INFO: Executing node convert in dir: /tmp/heudiconvdcm8J1iMW/convert
INFO: Running: dcm2niix -b y -z i -x n -t n -m n -f anat -o /tmp/heudiconvdcm8J1iMW/convert -s n -v n /tmp/heudiconvdcm8J1iMW/convert/SC02162.MR.SACKLER_SI-CH.0011.0001.2015.10.07.16.20.19.671875.4461819.IMA
INFO: Executing node embedder in dir: /tmp/heudiconvdcm8J1iMW/embedder
INFO: Post-treating /data2/SC02162/output7/anat/sub-SC02162_T2w.json file
INFO: Converting /data2/SC02162/output7/func/sub-SC02162_task-mbmf_run-3_sbref (1 DICOMs) -> /data2/SC02162/output7/func . Converter: dcm2niix . Output types: ('nii.gz', 'dicom')
INFO: Executing node convert in dir: /tmp/heudiconvdcm8J1iMW/convert
INFO: Running: dcm2niix -b y -z i -x n -t n -m n -f func -o /tmp/heudiconvdcm8J1iMW/convert -s n -v n /tmp/heudiconvdcm8J1iMW/convert/SC02162.MR.SACKLER_SI-CH.0008.0001.2015.10.07.16.20.19.671875.4445431.IMA
INFO: Executing node embedder in dir: /tmp/heudiconvdcm8J1iMW/embedder
INFO: Post-treating /data2/SC02162/output7/func/sub-SC02162_task-mbmf_run-3_sbref.json file
INFO: Converting /data2/SC02162/output7/func/sub-SC02162_task-rest_sbref (1 DICOMs) -> /data2/SC02162/output7/func . Converter: dcm2niix . Output types: ('nii.gz', 'dicom')
INFO: Executing node convert in dir: /tmp/heudiconvdcm8J1iMW/convert
INFO: Running: dcm2niix -b y -z i -x n -t n -m n -f func -o /tmp/heudiconvdcm8J1iMW/convert -s n -v n /tmp/heudiconvdcm8J1iMW/convert/SC02162.MR.SACKLER_SI-CH.0012.0001.2015.10.07.16.20.19.671875.4465917.IMA
INFO: Executing node embedder in dir: /tmp/heudiconvdcm8J1iMW/embedder
INFO: Post-treating /data2/SC02162/output7/func/sub-SC02162_task-rest_sbref.json file
INFO: Converting /data2/SC02162/output7/func/sub-SC02162_task-mbmf_run-2_bold (640 DICOMs) -> /data2/SC02162/output7/func . Converter: dcm2niix . Output types: ('nii.gz', 'dicom')
INFO: Executing node convert in dir: /tmp/heudiconvdcm8J1iMW/convert
INFO: Running: dcm2niix -b y -z i -x n -t n -m n -f func -o /tmp/heudiconvdcm8J1iMW/convert -s n -v n /tmp/heudiconvdcm8J1iMW/convert/SC02162.MR.SACKLER_SI-CH.0007.0001.2015.10.07.16.20.19.671875.4433174.IMA
INFO: Executing node embedder in dir: /tmp/heudiconvdcm8J1iMW/embedder

Text from the subject's "edit.txt" output file:

{('anat/sub-{subject}_T1w', ('nii.gz', 'dicom'), None): [{'item': '10-T1w_MPR1_AP'}],
 ('anat/sub-{subject}_T2w', ('nii.gz', 'dicom'), None): [{'item': '11-T2w_SPC1_AP'}],
 ('anat/sub-{subject}_loc', ('nii.gz', 'dicom'), None): [{'item': '1-Localizer (32 Ch + Physio )'}],
 ('dwi/sub-{subject}_dir-{dir}_dwi', ('nii.gz', 'dicom'), None): [{'dir': 'RL', 'item': '20-DTI-2mm_MB3_RL'},
                                                                  {'dir': 'LR', 'item': '26-DTI-2mm_MB3_LR'}],
 ('fmap/sub-{subject}_dir-{dir}_epi', ('nii.gz', 'dicom'), None): [{'dir': 'AP', 'item': '2-SpinEchoFieldMap_AP'},
                                                                   {'dir': 'PA', 'item': '3-SpinEchoFieldMap_PA'}],
 ('func/sub-{subject}_task-mbmf_run-1_bold', ('nii.gz', 'dicom'), None): [{'item': '5-MBMF_tfMRI-640TR_Run1'}],
 ('func/sub-{subject}_task-mbmf_run-1_sbref', ('nii.gz', 'dicom'), None): [{'item': '4-MBMF_tfMRI-640TR_Run1'}],
 ('func/sub-{subject}_task-mbmf_run-2_bold', ('nii.gz', 'dicom'), None): [{'item': '7-MBMF_tfMRI-640TR_Run2'}],
 ('func/sub-{subject}_task-mbmf_run-2_sbref', ('nii.gz', 'dicom'), None): [{'item': '6-MBMF_tfMRI-640TR_Run2'}],
 ('func/sub-{subject}_task-mbmf_run-3_bold', ('nii.gz', 'dicom'), None): [{'item': '9-MBMF_tfMRI-640TR_Run3'}],
 ('func/sub-{subject}_task-mbmf_run-3_sbref', ('nii.gz', 'dicom'), None): [{'item': '8-MBMF_tfMRI-640TR_Run3'}],
 ('func/sub-{subject}_task-rest_bold', ('nii.gz', 'dicom'), None): [{'item': '14-rfMRI_REST_AP'}],
 ('func/sub-{subject}_task-rest_sbref', ('nii.gz', 'dicom'), None): [{'item': '12-rfMRI_REST_AP'}]}

Text from the subject's "_scans.tsv" file:

filename	acq_time	operator	randstr
anat/sub-SC02162_T1w.nii.gz	2015-10-07T15:39:03		_XP.VvM
anat/sub-SC02162_T2w.nii.gz 2015-10-07T15:39:03 "gX>_""U-B"
func/sub-SC02162_task-mbmf_run-2_bold.nii.gz 2015-10-07T15:39:03 "lvi;80""p"
func/sub-SC02162_task-mbmf_run-2_sbref.nii.gz 2015-10-07T15:39:03 ZN2[.sc~
func/sub-SC02162_task-mbmf_run-3_sbref.nii.gz 2015-10-07T15:39:03 "}T""+8x*!"
func/sub-SC02162_task-rest_sbref.nii.gz 2015-10-07T15:39:03 5rp:k!M@`

mgxd commented on September 28, 2024

@gmrosenb I meant replacing the hardcoded runs with an {item} substitution when declaring each series' list. Also, I'd recommend using another field provided in dicominfo to ensure each conditional will be unique.

Can you try the following:

remove all the mbmf_runX variables and define one list for task_bold/task_sbref:

mbmf_sbref = create_key('func/sub-{subject}_task-mbmf_run-{item}_sbref')
mbmf = create_key('func/sub-{subject}_task-mbmf_run-{item}_bold')

then, while looping through seqinfo

if 'MBMF_tfMRI' in s.series_description:
    if s.dim4 == 640:
        info[mbmf].append({'item': s.series_id})
    elif s.dim4 == 1:
        info[mbmf_sbref].append({'item': s.series_id})
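
To make the suggestion concrete, here is a self-contained sketch of that dim4-based dispatch (the seqinfo rows below are mock data I made up, carrying only the fields the conditional needs):

```python
from collections import namedtuple

# Mock stand-in for heudiconv's seqinfo rows; only the fields used below.
SeqInfo = namedtuple('SeqInfo', 'series_id series_description dim4')

seqinfo = [
    SeqInfo('4-MBMF_tfMRI-640TR_Run1_SBRef', 'MBMF_tfMRI-640TR_Run1_SBRef', 1),
    SeqInfo('5-MBMF_tfMRI-640TR_Run1', 'MBMF_tfMRI-640TR_Run1', 640),
    SeqInfo('6-MBMF_tfMRI-640TR_Run2_SBRef', 'MBMF_tfMRI-640TR_Run2_SBRef', 1),
    SeqInfo('7-MBMF_tfMRI-640TR_Run2', 'MBMF_tfMRI-640TR_Run2', 640),
]

mbmf, mbmf_sbref = 'mbmf_key', 'sbref_key'  # stand-ins for create_key(...) tuples
info = {mbmf: [], mbmf_sbref: []}

for s in seqinfo:
    if 'MBMF_tfMRI' in s.series_description:
        if s.dim4 == 640:      # full-length BOLD run
            info[mbmf].append({'item': s.series_id})
        elif s.dim4 == 1:      # single-band reference volume
            info[mbmf_sbref].append({'item': s.series_id})
```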

gmrosenb commented on September 28, 2024

@mgxd Thank you for your quick response. I tried what you suggested, with a single task and task_sbref key for each run, and added multiple conditional statements for each scan (although, if the edit.txt file displays the run information correctly for this participant, and all of my participants' scans are named the same way, shouldn't one condition be enough?). In any case, the text from my new heuristic file is below.

This time, heudiconv only ran dcm2niix on one scan (the first task run - mbmf). As in my previous attempts, the edit.txt file lists all of the runs, but the scans.tsv file shows that only one run was converted (both are pasted below, in case they're helpful).

Any thoughts? Are my heuristic file changes, in fact, consistent with your suggestions?

heuristic file text:

import os

def create_key(template, outtype=('nii.gz','dicom'), annotation_classes=None):
    if template is None or not template:
        raise ValueError('Template must be a valid format string')
    return (template, outtype, annotation_classes)


def infotodict(seqinfo):
    """Heuristic evaluator for determining which runs belong where
    allowed template fields - follow python string module:
    item: index within category
    subject: participant id
    seqitem: run number during scanning
    subindex: sub index within group
    """
    loc = create_key('anat/sub-{subject}_loc')
    fmap = create_key('fmap/sub-{subject}_dir-{dir}_epi')
    mbmf_sbref = create_key('func/sub-{subject}_task-mbmf_run-{item}_sbref')
    mbmf = create_key('func/sub-{subject}_task-mbmf_run-{item}_bold')
    t1w = create_key('anat/sub-{subject}_T1w')
    t2w = create_key('anat/sub-{subject}_T2w')    
    rest_sbref = create_key('func/sub-{subject}_task-rest_sbref')
    rest = create_key('func/sub-{subject}_task-rest_bold')
    dwi = create_key('dwi/sub-{subject}_dir-{dir}_dwi')
    
    info = {loc: [], fmap: [], mbmf_sbref: [], mbmf: [], t1w: [], t2w: [], 
            rest_sbref: [], rest: [], dwi: []}
    
   
    for idx, s in enumerate(seqinfo):
        if ('MBMF_tfMRI' in s.series_description):
            if (s.dim4 == 1):
           #     runnum = s.protocol_name.split('n')[1] 
                info[mbmf_sbref].append({'item': s.series_id})
            elif (s.dim4 == 640):
             #   runnum = s.protocol_name.split('n')[1] 
                info[mbmf].append({'item': s.series_id})
        elif ('Localizer' in s.series_description) and (s.dim4 == 1):
            info[loc].append({'item': s.series_id})
        elif ('SpinEcho' in s.series_description) and (s.dim4 == 1):
            if ('AP' in s.series_description):
                info[fmap].append({'item': s.series_id, 'dir': 'AP'})       
            elif ('PA' in s.series_description):
                info[fmap].append({'item': s.series_id, 'dir': 'PA'}) 
        elif ('REST' in s.series_description) and not ('NORM' in s.image_type):
            if (s.dim4 == 1):
                info[rest_sbref].append({'item': s.series_id})
            elif (s.dim4 == 425):
                info[rest].append({'item': s.series_id})
        elif ('DTI' in s.series_description) and (s.dim4 == 57):
            if ('LR' in s.series_description):
                info[dwi].append({'item': s.series_id,'dir': 'LR'}) 
            elif ('RL' in s.series_description):
                info[dwi].append({'item': s.series_id,'dir': 'RL'}) 
        elif ('T1w' in s.series_description) and (s.dim4 == 1):
            info[t1w].append({'item': s.series_id})    
        elif ('T2w' in s.series_description) and (s.dim4 == 1):
            info[t2w].append({'item': s.series_id})    
        else:
            pass        
            
    return info

edit.txt output:

 {('anat/sub-{subject}_T1w', ('nii.gz', 'dicom'), None): [{'item': '10-T1w_MPR1_AP'}],
 ('anat/sub-{subject}_T2w', ('nii.gz', 'dicom'), None): [{'item': '11-T2w_SPC1_AP'}],
 ('anat/sub-{subject}_loc', ('nii.gz', 'dicom'), None): [{'item': '1-Localizer (32 Ch + Physio )'}],
 ('dwi/sub-{subject}_dir-{dir}_dwi', ('nii.gz', 'dicom'), None): [{'dir': 'RL',
                                                                   'item': '20-DTI-2mm_MB3_RL'},
                                                                  {'dir': 'LR',
                                                                   'item': '26-DTI-2mm_MB3_LR'}],
 ('fmap/sub-{subject}_dir-{dir}_epi', ('nii.gz', 'dicom'), None): [{'dir': 'AP',
                                                                    'item': '2-SpinEchoFieldMap_AP'},
                                                                   {'dir': 'PA',
                                                                    'item': '3-SpinEchoFieldMap_PA'}],
 ('func/sub-{subject}_task-mbmf_run-{item}_bold', ('nii.gz', 'dicom'), None): [{'item': '5-MBMF_tfMRI-640TR_Run1'},
                                                                               {'item': '7-MBMF_tfMRI-640TR_Run2'},
                                                                               {'item': '9-MBMF_tfMRI-640TR_Run3'}],
 ('func/sub-{subject}_task-mbmf_run-{item}_sbref', ('nii.gz', 'dicom'), None): [{'item': '4-MBMF_tfMRI-640TR_Run1'},
                                                                                {'item': '6-MBMF_tfMRI-640TR_Run2'},
                                                                                {'item': '8-MBMF_tfMRI-640TR_Run3'}],
 ('func/sub-{subject}_task-rest_bold', ('nii.gz', 'dicom'), None): [{'item': '14-rfMRI_REST_AP'},
                                                                    {'item': '18-rfMRI_REST_PA'}],
 ('func/sub-{subject}_task-rest_sbref', ('nii.gz', 'dicom'), None): [{'item': '12-rfMRI_REST_AP'},
                                                                     {'item': '16-rfMRI_REST_PA'}]}

scans.tsv output:

filename	acq_time	operator	randstr
func/sub-SC02162_task-mbmf_run-1_bold.nii.gz	2015-10-07T15:11:48		|6qj8Kp=

mgxd commented on September 28, 2024

hmmm...have you tried deleting the output directory and retrying or using the --overwrite flag?

gmrosenb commented on September 28, 2024

Each time I run heudiconv, I either delete the output directory or direct the output to a new directory.

I also just tried heudiconv using the same heuristic file on another participant's data. Once again, only a few runs were converted, although the converted runs differed from the ones converted for the other participant. Not sure if that gives any clues.

mgxd commented on September 28, 2024

@yarikoptic have you seen this before? my guess is something is going wrong in seqinfo

@gmrosenb would it be possible to share one subject's DICOMs for debugging?

yarikoptic commented on September 28, 2024

sorry ... nothing immediately rings a bell, and I haven't absorbed all the information on what is going on here enough to contribute -- I will reopen this issue since the discussion is still ongoing ;)

gmrosenb commented on September 28, 2024

The file is too big to upload directly, but here's a link to one participant's data on google drive. Let me know if that works.

FYI, before running the script, I renamed the unzipped directory SC02162 and moved all of the dicom files to the directory SC02162/raw.

Thank you!

mgxd commented on September 28, 2024

hi @gmrosenb

It looks like the heuristic was the culprit - after some minor changes, everything seems to convert. I'm not entirely sure why yours wasn't working; it may require some restructuring within heudiconv.

scans.tsv

filename	acq_time	operator	randstr
anat/sub-SC02162_T1w.nii.gz	2015-10-07T15:46:08	n/a	YsXen#SF
anat/sub-SC02162_T2w.nii.gz	2015-10-07T15:52:17	n/a	?S2;0]r\
anat/sub-SC02162_loc-1.nii.gz	2015-10-07T15:06:32	n/a	guCp_{Q*
anat/sub-SC02162_loc-2.nii.gz	2015-10-07T15:06:32	n/a	)nwy@\eT
anat/sub-SC02162_loc-3.nii.gz	2015-10-07T15:06:32	n/a	XQ<RztHG
dwi/sub-SC02162_dir-LR_dwi.nii.gz	2015-10-07T16:13:06	n/a	d^mUGv!J
dwi/sub-SC02162_dir-RL_dwi.nii.gz	2015-10-07T16:08:37	n/a	lywTt_i^
fmap/sub-SC02162_dir-AP_epi.nii.gz	2015-10-07T15:08:07	n/a	*K<3l=fj
fmap/sub-SC02162_dir-PA_epi.nii.gz	2015-10-07T15:08:36	n/a	"W!YwV""t3"
func/sub-SC02162_task-mbmf_run-1_bold.nii.gz	2015-10-07T15:12:28	n/a	"|_"">wYjK"
func/sub-SC02162_task-mbmf_run-1_sbref.nii.gz	2015-10-07T15:12:28	n/a	/+XVEk*(
func/sub-SC02162_task-mbmf_run-2_bold.nii.gz	2015-10-07T15:22:17	n/a	Y@ILF}e~
func/sub-SC02162_task-mbmf_run-2_sbref.nii.gz	2015-10-07T15:22:17	n/a	"[q6""}P+E"
func/sub-SC02162_task-mbmf_run-3_bold.nii.gz	2015-10-07T15:32:06	n/a	Ov}.+w1{
func/sub-SC02162_task-mbmf_run-3_sbref.nii.gz	2015-10-07T15:32:06	n/a	:7(dQk#S
func/sub-SC02162_task-rest_run-1_bold.nii.gz	2015-10-07T15:54:06	n/a	"n{F/aNc"""
func/sub-SC02162_task-rest_run-1_sbref.nii.gz	2015-10-07T15:54:06	n/a	p:!*lDr3
func/sub-SC02162_task-rest_run-2_bold.nii.gz	2015-10-07T16:01:08	n/a	"iGJPo""lu"
func/sub-SC02162_task-rest_run-2_sbref.nii.gz	2015-10-07T16:01:08	n/a	P6\BE[/{

heuristic.py

import os

def create_key(template, outtype=('nii.gz', 'dicom'), annotation_classes=None):
    if template is None or not template:
        raise ValueError('Template must be a valid format string')
    return (template, outtype, annotation_classes)


def infotodict(seqinfo):
    """Heuristic evaluator for determining which runs belong where
    allowed template fields - follow python string module:
    item: index within category
    subject: participant id
    seqitem: run number during scanning
    subindex: sub index within group
    """
    loc = create_key('anat/sub-{subject}_loc')
    fmap = create_key('fmap/sub-{subject}_dir-{dir}_epi')
    mbmf_sbref = create_key('func/sub-{subject}_task-mbmf_run-{item}_sbref')
    mbmf = create_key('func/sub-{subject}_task-mbmf_run-{item}_bold')
    t1w = create_key('anat/sub-{subject}_T1w')
    t2w = create_key('anat/sub-{subject}_T2w')    
    rest_sbref = create_key('func/sub-{subject}_task-rest_run-{item}_sbref')
    rest = create_key('func/sub-{subject}_task-rest_run-{item}_bold')
    dwi = create_key('dwi/sub-{subject}_dir-{dir}_dwi')

    info = {loc: [], fmap: [], mbmf_sbref: [], mbmf: [], t1w: [], t2w: [], 
            rest_sbref: [], rest: [], dwi: []}


    for idx, s in enumerate(seqinfo):
        if ('MBMF_tfMRI' in s.protocol_name):
            if '_SBRef' in s.series_description:
                info[mbmf_sbref].append({'item': s.series_id})
            else:
                info[mbmf].append({'item': s.series_id})
        elif ('Localizer' in s.protocol_name) and (s.dim4 == 1):
            info[loc].append({'item': s.series_id})
        elif ('SpinEcho' in s.protocol_name) and (s.dim4 == 1):
            if ('AP' in s.series_description):
                info[fmap].append({'item': s.series_id, 'dir': 'AP'})       
            elif ('PA' in s.series_description):
                info[fmap].append({'item': s.series_id, 'dir': 'PA'}) 
        elif ('REST' in s.protocol_name) and not ('NORM' in s.image_type):
            if '_SBRef' in s.series_description:
                info[rest_sbref].append({'item': s.series_id})
            else:
                info[rest].append({'item': s.series_id})
        elif ('DTI' in s.protocol_name) and (s.dim4 == 57):
            if ('LR' in s.series_description):
                info[dwi].append({'item': s.series_id,'dir': 'LR'}) 
            elif ('RL' in s.series_description):
                info[dwi].append({'item': s.series_id,'dir': 'RL'}) 
        elif ('T1w' in s.protocol_name) and (s.dim4 == 1):
            info[t1w].append({'item': s.series_id})    
        elif ('T2w' in s.protocol_name) and (s.dim4 == 1):
            info[t2w].append({'item': s.series_id})    
        else:
            pass

    return info

Could you try this heuristic and see if it works for you as well?

gmrosenb commented on September 28, 2024

Hi @mgxd,

Interestingly (and unfortunately), this new heuristic file is not working for me. It is still only converting one scan. I tried running it on a second computer, with the same results.

I am operating on a Mac; do you think that could be the issue? Ultimately, I plan to use singularity to run heudiconv on our HPC. Maybe I'll try to set that up and see if it works there, unless you have any other ideas.

Thanks for all your help!

gmrosenb commented on September 28, 2024

@mgxd I finally tried installing heudiconv locally (without using docker or singularity) and it worked, using the heuristic file you suggested!

For the record, I also tried running heudiconv using singularity, both on my computer and on our HPC cluster. I tried running singularity two ways on the server: once using the img file I created with docker2singularity, and once using an img file from another lab who had successfully run heudiconv with both docker and singularity. None of these other methods worked. I'm wondering if there's some issue with my docker download/connection? In any case, I'm glad it finally works.
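
For anyone retracing the docker2singularity route, a rough sketch (image names and paths here are illustrative, not the exact commands from this thread; the generated filename embeds a date and hash, so check your output directory):

```shell
# Illustrative only: convert the Docker image to a Singularity image,
# then run it with the host directory bound into the container.
docker run --privileged --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v $PWD:/output \
    singularityware/docker2singularity nipy/heudiconv

# Resulting image is named like nipy_heudiconv-<date>-<hash>.img
singularity run -B $PWD:/data nipy_heudiconv-*.img \
    -d /data/{subject}/raw/*.IMA -s SC02162 -f /data/heuristic.py \
    -c dcm2niix -b -o /data/output
```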

Thanks for your help!

mgxd commented on September 28, 2024

@gmrosenb glad you got it working - I ran it locally, but I'll test running it inside a container - we may have to update dcm2niix's version in our Dockerfile

chrisgorgo commented on September 28, 2024

There seem to be more people running into this: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/bids-discussion/wLq9Zu7WrU0/oMc_IC_8EAAJ

naturalcici commented on September 28, 2024

Thanks @chrisfilo for posting my link here. Here's what I posted on the bids-discussion group. Any help would be much appreciated:

Hi everyone!

I've been trying to convert my dicom files into BIDS structure using docker, but for some reason it only converts the first run of the task and does not recognize any runs after the first. All of my runs have the same dimensions, which could be causing the problem. I've tried to sort them using if statements with protocol_name or series_description, but those don't seem to fix the problem.

Here are my sources for your reference:

My run 1 & run 2 have the same dimensions: 82, 82, 56, 870.
run1 protocol name = r1_2.7iso_ap_64ch_mb8
run2 protocol name = r2_ap_64ch_mb8

Here's my heuristic file:

import os

def create_key(template, outtype=('nii.gz',), annotation_classes=None):
    if template is None or not template:
        raise ValueError('Template must be a valid format string')
    return template, outtype, annotation_classes


def infotodict(seqinfo):
    """Heuristic evaluator for determining which runs belong where
    allowed template fields - follow python string module:
    item: index within category
    subject: participant id
    seqitem: run number during scanning
    subindex: sub index within group
    """
    t1 = create_key('{subject}/anat/sub-{subject}_T1')
    task = create_key('{subject}/func/sub-{subject}_run-{item:02d}_task-{type}_bold')

    info = {t1: [], task: []}  # , sbref: [], dc: []}

    for idx, s in enumerate(seqinfo):

        print s

        if (s.dim3 == 224) and (s.dim4 == 1):
            info[t1] = [s.series_id]

        if (s.dim4 == 870) and ('r1_2.7iso_ap_64ch_mb8' in s.protocol_name):
            info[task].append({'item': s.series_id, 'type': 'painsound'})

        if (s.dim4 == 870) and ('r2_ap_64ch_mb8' in s.protocol_name):
            info[task].append({'item': s.series_id, 'type': 'painsound'})

    return info

When I run this script, it initially prints information for all the runs, but stops after the following:

INFO: Doing conversion using dcm2niix
INFO: Converting /data/output/WCW_session1/anat/sub-WCW_session1_T1 (224 DICOMs) -> /data/output/WCW_session1/anat . Converter: dcm2niix . Output types: ('nii.gz',)
INFO: Executing node convert in dir: /tmp/heudiconvdcmPcr9b2/convert
INFO: Running: dcm2niix -b y -z i -x n -t n -m n -f anat -o /tmp/heudiconvdcmPcr9b2/convert -s n -v n /tmp/heudiconvdcmPcr9b2/convert/20170612_WCW.MR.COCOAN_FMRI_MBEPI.0015.0001.2017.06.12.16.07.00.956995.128218061.IMA
INFO: Executing node embedder in dir: /tmp/heudiconvdcmPcr9b2/embedder
INFO: Post-treating /data/output/WCW_session1/anat/sub-WCW_session1_T1.json file
INFO: Converting /data/output/WCW_session1/func/sub-WCW_session1_run-01_task-painsound_bold (870 DICOMs) -> /data/output/WCW_session1/func . Converter: dcm2niix . Output types: ('nii.gz',)
INFO: Executing node convert in dir: /tmp/heudiconvdcmPcr9b2/convert
INFO: Running: dcm2niix -b y -z i -x n -t n -m n -f func -o /tmp/heudiconvdcmPcr9b2/convert -s n -v n /tmp/heudiconvdcmPcr9b2/convert/20170612_WCW.MR.COCOAN_FMRI_MBEPI.0023.0001.2017.06.12.16.07.00.956995.128501804.IMA
INFO: Executing node embedder in dir: /tmp/heudiconvdcmPcr9b2/embedder

Any guidance would be appreciated. Thanks!!

Catherine

satra commented on September 28, 2024

@naturalcici - could you please post your heudiconv command as well?

naturalcici commented on September 28, 2024

Hi @satra, here's the command:

docker run --rm -it -v $PWD:/data nipy/heudiconv -d /data/{subject}/*/*IMA -s WCW_session1 -f /data/WCW_heuristic1.py -c dcm2niix -b -o /data/output

Thanks for your help!

Catherine

satra commented on September 28, 2024

@naturalcici - could you please try the following:

docker run --rm -it -v $PWD:/data nipy/heudiconv -d /data/{subject}/*/*IMA \
  -s WCW_session1 -f /data/WCW_heuristic1.py -c dcm2niix -b -o /data/output --minmeta

if the above works, it will help us narrow things down to a particular library.

wanirepo commented on September 28, 2024

Hi @satra, thanks for your answer. Actually, @naturalcici is working on the same dataset with me. I tried the line above (i.e., adding the --minmeta option), but it didn't work. It still converts only one run, then stops.

wanirepo commented on September 28, 2024

@gmrosenb as you suggested, I wonder if you could test running the same heuristic file inside a container. I'd like to know what the problem is. I'm running the above command in the container, and it still doesn't work (i.e., it converts only one run).

wanirepo commented on September 28, 2024

When I remove a dicom directory that has already been converted to NIfTI, interestingly it moves on to the next directory. See the following message from when I left only one run after removing three runs (FYI, I had three functional runs and one structural run, and the remaining run was the structural one).

$ docker run --rm -it -v $PWD:/data nipy/heudiconv -d /data/{subject}/*/*IMA -s WCWsession1 -f /data/heuristic.py -c dcm2niix -b -o /data/output --minmeta --ses 1 --overwrite

INFO: Need to process 1 study sessions
INFO: PROCESSING STARTS: {'session': '1', 'outdir': '/data/output/', 'subject': 'WCWsession1'}
INFO: Processing 224 dicoms
INFO: Reloading existing filegroup.json because /data/output/.heudiconv/WCWsession1/ses-1/info/WCWsession1_ses-1.edit.txt exists
INFO: Doing conversion using dcm2niix

INFO: Converting /data/output/func/sub-WCWsession1_run-01_task-painsound_bold (870 DICOMs) -> /data/output/func . Converter: dcm2niix . Output types: ('nii',)
INFO: Executing node embedder in dir: /tmp/heudiconvdcmAoBaqa/embedder
ERROR: Embedding failed: [Errno 2] No such file or directory: u'/data/WCWsession1/R1_2_7ISO_AP_64CH_MB8_0023/20170612_WCW.MR.COCOAN_FMRI_MBEPI.0023.0001.2017.06.12.16.07.00.956995.128501804.IMA'
INFO: Post-treating /data/output/func/sub-WCWsession1_run-01_task-painsound_bold.json file

INFO: Converting /data/output/func/sub-WCWsession1_run-02_task-painsound_bold (870 DICOMs) -> /data/output/func . Converter: dcm2niix . Output types: ('nii',)
INFO: Executing node embedder in dir: /tmp/heudiconvdcmAoBaqa/embedder
ERROR: Embedding failed: [Errno 2] No such file or directory: u'/data/WCWsession1/R2_AP_64CH_MB8_0025/20170612_WCW.MR.COCOAN_FMRI_MBEPI.0025.0001.2017.06.12.16.07.00.956995.128572462.IMA'
INFO: Post-treating /data/output/func/sub-WCWsession1_run-02_task-painsound_bold.json file

INFO: Converting /data/output/func/sub-WCWsession1_run-03_task-painsound_bold (870 DICOMs) -> /data/output/func . Converter: dcm2niix . Output types: ('nii',)
INFO: Executing node embedder in dir: /tmp/heudiconvdcmAoBaqa/embedder
ERROR: Embedding failed: [Errno 2] No such file or directory: u'/data/WCWsession1/R3_AP_64CH_MB8_0027/20170612_WCW.MR.COCOAN_FMRI_MBEPI.0027.0001.2017.06.12.16.07.00.956995.128647216.IMA'
INFO: Post-treating /data/output/func/sub-WCWsession1_run-03_task-painsound_bold.json file

INFO: Converting /data/output/anat/sub-WCWsession1_T1 (224 DICOMs) -> /data/output/anat . Converter: dcm2niix . Output types: ('nii',)
INFO: Executing node convert in dir: /tmp/heudiconvdcmAoBaqa/convert
INFO: Running: dcm2niix -b y -z i -x n -t n -m n -f anat -o /tmp/heudiconvdcmAoBaqa/convert -s n -v n /tmp/heudiconvdcmAoBaqa/convert/20170612_WCW.MR.COCOAN_FMRI_MBEPI.0015.0001.2017.06.12.16.07.00.956995.128218061.IMA
INFO: Executing node embedder in dir: /tmp/heudiconvdcmAoBaqa/embedder
ERROR: Embedding failed: Cannot work out file type of "/data/output/anat/sub-WCWsession1_T1.nii"
Interface Function failed to run.
INFO: Post-treating /data/output/anat/sub-WCWsession1_T1.json file
INFO: Populating template files under /data/output/
INFO: PROCESSING DONE: {'session': '1', 'outdir': '/data/output/', 'subject': 'WCWsession1'}

Without removing any directories, the following happened (only one run was converted):

$ docker run --rm -it -v $PWD:/data nipy/heudiconv -d /data/{subject}/*/*IMA -s WCWsession1 -f /data/heuristic.py -c dcm2niix -b -o /data/output --minmeta --ses 1

INFO: Need to process 1 study sessions
INFO: PROCESSING STARTS: {'session': '1', 'outdir': '/data/output/', 'subject': 'WCWsession1'}
INFO: Processing 2834 dicoms
INFO: Analyzing 2834 dicoms
INFO: Generated sequence info with 4 entries
INFO: Doing conversion using dcm2niix
INFO: Converting /data/output/func/sub-WCWsession1_run-01_task-painsound_bold (870 DICOMs) -> /data/output/func . Converter: dcm2niix . Output types: ('nii',)
INFO: Executing node convert in dir: /tmp/heudiconvdcm3uoGWV/convert
INFO: Running: dcm2niix -b y -z i -x n -t n -m n -f func -o /tmp/heudiconvdcm3uoGWV/convert -s n -v n /tmp/heudiconvdcm3uoGWV/convert/20170612_WCW.MR.COCOAN_FMRI_MBEPI.0023.0001.2017.06.12.16.07.00.956995.128501804.IMA
INFO: Executing node embedder in dir: /tmp/heudiconvdcm3uoGWV/embedder

Any comments and inputs would be appreciated!
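The "Reloading existing filegroup.json" line in the first log above means heudiconv is reusing a cached sequence listing under .heudiconv, which still points at the deleted DICOM directories. One way to force a clean re-scan before rerunning is to drop that per-subject cache (path taken from the log above; adjust to your own output directory):

```shell
# delete heudiconv's per-subject cache so the next run re-analyzes all DICOMs
rm -rf /data/output/.heudiconv/WCWsession1
```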

yarikoptic commented on September 28, 2024

is there a chance to access that dataset, then we could chase all the rabits at once...?

wanirepo commented on September 28, 2024

@yarikoptic Thanks for your help! I'm sharing a Dropbox link to our data (around 3GB). It includes "docker_script.txt", where I stored the terminal commands I use to run heudiconv in Docker; my heuristic.py file should be there as well. If you have any trouble downloading the data, please let me know.
https://www.dropbox.com/sh/1mmklk0p2k7hypo/AABw2FrtigsyjxB0io7wxxIra?dl=0

Thanks for your time to take a look at this problem!

wanirepo commented on September 28, 2024

@yarikoptic I'm sorry to ask again, but I wonder if you could have a look at our data and try the Docker version of the tool to convert it when you have time. We're waiting for your response. Thanks.

yarikoptic commented on September 28, 2024

sorry @wanirepo... I think I have downloaded the data already but then got overloaded... will try to get to it in some "leisure" moment in the upcoming days unless someone beats me to it

jamesfcavanagh commented on September 28, 2024

Was this issue ever resolved using Docker? I'm having the same problem.

mgxd commented on September 28, 2024

@jamesfcavanagh are you using the latest release on dockerhub? what does your heudiconv command look like? what output are you getting?

jamesfcavanagh commented on September 28, 2024

I assume so; I'm using nipy/heudiconv:latest.

Here's the command:

docker run --rm -it -v /Users/lab217imac2/Documents/fMRI:/base nipy/heudiconv:latest -d /base/{subject}/*/*/*.dcm -o /base/Nifti/ -f /base/Nifti/code/heuristic.py -c dcm2niix -b -s 86269 --overwrite

I've tried a variety of heuristic files; they all behave the same. Here's the most straightforward:
t1w_high_res = create_key('sub-{subject}/anat/sub-{subject}_acq-highres_T1w')
task1 = create_key('sub-{subject}/func/sub-{subject}_task-doors_run-01_bold')
task2 = create_key('sub-{subject}/func/sub-{subject}_task-doors_run-02_bold')
task3 = create_key('sub-{subject}/func/sub-{subject}_task-doors_run-03_bold')

Final output:

190625-16:22:09,404 nipype.interface INFO:
stdout 2019-06-25T16:22:09.401180:slices stacked despite varying acquisition numbers (if this is not desired recompile with 'mySegmentByAcq')
INFO: stdout 2019-06-25T16:22:09.401180:slices stacked despite varying acquisition numbers (if this is not desired recompile with 'mySegmentByAcq')
190625-16:22:09,406 nipype.interface INFO:
stdout 2019-06-25T16:22:09.401180:Convert 689 DICOM as ./func (82x82x56x689)
INFO: stdout 2019-06-25T16:22:09.401180:Convert 689 DICOM as ./func (82x82x56x689)
190625-16:22:24,270 nipype.interface INFO:
stdout 2019-06-25T16:22:24.269997:Compress: "/usr/bin/pigz" -b 960 -n -f -6 "./func.nii"
INFO: stdout 2019-06-25T16:22:24.269997:Compress: "/usr/bin/pigz" -b 960 -n -f -6 "./func.nii"
190625-16:22:24,270 nipype.interface INFO:
stdout 2019-06-25T16:22:24.269997:Conversion required 25.175791 seconds (2.360479 for core code).
INFO: stdout 2019-06-25T16:22:24.269997:Conversion required 25.175791 seconds (2.360479 for core code).
190625-16:22:24,634 nipype.workflow INFO:
[Node] Finished "convert".
INFO: [Node] Finished "convert".
190625-16:22:28,133 nipype.workflow INFO:
[Node] Setting-up "embedder" in "/tmp/embedmetac28s6t2f/embedder".
INFO: [Node] Setting-up "embedder" in "/tmp/embedmetac28s6t2f/embedder".
190625-16:22:29,126 nipype.workflow INFO:
[Node] Running "embedder" ("nipype.interfaces.utility.wrappers.Function")
INFO: [Node] Running "embedder" ("nipype.interfaces.utility.wrappers.Function")

mgxd commented on September 28, 2024

I think this is the same issue as #288.

Basically, what I suspect is happening is that you are hard-coding paths (with specific runs) and then grouping multiple sequences as a single run. You shouldn't need the --overwrite flag unless the data already exists before calling heudiconv.

# your current approach
task1 = create_key('sub-{subject}/func/sub-{subject}_task-doors_run-01_bold')
task2 = create_key('sub-{subject}/func/sub-{subject}_task-doors_run-02_bold')
task3 = create_key('sub-{subject}/func/sub-{subject}_task-doors_run-03_bold')
...
# a more flexible approach
doors = create_key('sub-{subject}/func/sub-{subject}_task-doors_run-{item:02d}_bold')

If you can share your heuristic + dicominfo.tsv for the problematic subject, we can confirm whether this is in fact the problem
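For reference, a minimal runnable sketch of a heuristic built around the {item} field; the matching rules on protocol_name are illustrative assumptions, not taken from the poster's data:

```python
def create_key(template, outtype=('nii.gz',), annotation_classes=None):
    # heudiconv expects each key as (template, output types, annotations)
    if not template:
        raise ValueError('Template must be a valid format string')
    return (template, outtype, annotation_classes)


def infotodict(seqinfo):
    """Map scanned series onto BIDS names; {item} numbers repeats automatically."""
    t1w = create_key('sub-{subject}/anat/sub-{subject}_acq-highres_T1w')
    doors = create_key('sub-{subject}/func/sub-{subject}_task-doors_run-{item:02d}_bold')

    info = {t1w: [], doors: []}
    for s in seqinfo:
        # illustrative matching: adjust the substrings to your own protocol names
        if 'T1' in s.protocol_name:
            info[t1w].append(s.series_id)
        elif 'doors' in s.protocol_name.lower():
            # each append gets the next {item}: run-01, run-02, run-03, ...
            info[doors].append(s.series_id)
    return info
```

Because every matching series is appended to the same key, heudiconv numbers the runs itself and nothing gets overwritten when a fourth run shows up.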

mgxd commented on September 28, 2024

@jamesfcavanagh Awesome! Could you share a snippet of the error message you received when using < 2GB?

jamesfcavanagh commented on September 28, 2024

It's there, four comments above this one (under "Final output:"). It just ... ends. No error message.
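A process that ends silently like the log above is consistent with the container being killed at its memory cap, which matches the accepted fix in this thread (raising Docker's default 2GB limit). A sketch for checking the cap a process actually sees, assuming the standard cgroup v2/v1 mount points (which one exists depends on the host):

```python
# a sketch for reading the memory cap a container sees via cgroups
# (assumption: standard cgroup mount points; adjust if your host differs)
from pathlib import Path


def container_mem_limit_bytes():
    """Return the cgroup memory limit in bytes, or None if unlimited/unknown."""
    candidates = (
        '/sys/fs/cgroup/memory.max',                    # cgroup v2
        '/sys/fs/cgroup/memory/memory.limit_in_bytes',  # cgroup v1
    )
    for path in candidates:
        f = Path(path)
        if f.exists():
            raw = f.read_text().strip()
            return None if raw == 'max' else int(raw)
    return None


if __name__ == '__main__':
    limit = container_mem_limit_bytes()
    print('memory cap:', 'none/unknown' if limit is None else '%d bytes' % limit)
```

Running this inside the heudiconv container would show whether the 2GB default is in effect before a long conversion starts.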

yarikoptic commented on September 28, 2024

If this issue is still pertinent, please reopen - lots of time and releases passed since then
