
Comments (33)

Spenhouet commented on May 26, 2024

@mstaring @dzenanz A classic happened: for the example sample our code now works, and the solution also makes sense, but we have now run into a sample where it does not work.
I expect that you were both right that we do in fact need to factor in the center of rotation as well.
But we are struggling again to figure out how.
We already considered the formula you shared and also did more research.
I have again posted the full question here: https://stackoverflow.com/questions/71529039/how-factor-the-itk-centerofrotationpoint-in-a-affine-transformation-matrix

Maybe you could have a look to see if there is an obvious mistake, or post an answer.

from itkelastix.

ntatsisk commented on May 26, 2024

Hi @Spenhouet, I understand that it has been a long time since you raised this issue and your project may have moved forward by now. However, I just created a PR here InsightSoftwareConsortium/itk-torch-transform-bridge#6 that seems to do what you were aiming for. In particular, the test_regisration() function inside the test_cases.py file is tested (locally on my machine) using the data that you shared here (fixed_2mm.nii.gz, moving_2mm.nii.gz); it performs a registration with Elastix and then transforms affinely using MONAI. Hopefully this is helpful to you.

Some notes:

  • The PR is still a draft, so it is likely to change soon
  • I am using ITK to read the images instead of nibabel because I think the problem gets even more complicated otherwise. I don't know if it was a requirement for you to use nibabel in the first place.
  • I had trouble loading real_fixed.nii.gz, real_moving.nii.gz using ITK for some reason, and that's why I used the original images that you shared.

Let me know if you have any comments, or if you happen to test the code and it works for you. Any feedback will help improve the PR as well.


dzenanz commented on May 26, 2024

I think you are getting the directionality of the transform wrong. The transform from X (fixed) to Y (moving) is to be applied to points/pixels of Y, and it will yield corresponding points/pixels in X. This has confused me more than once.

So what you might want to do is construct the matrix without negating the translation, and then invert it. If you don't invert it, you will have an X<-Y transform. If you invert it, you will have a Y<-X transform. Alternatively, you might invert just the rotation part (which might boil down to a transpose) and negate the translation.
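That suggestion can be sketched in numpy; the rotation and translation values here are placeholders, not real registration output:

```python
import numpy as np

# Placeholder 3x3 rotation/shear block and translation vector
rotation = np.eye(3)
translation = np.array([1.0, -2.0, 0.5])

# Build the homogeneous X<-Y matrix without negating the translation ...
forward = np.eye(4)
forward[:3, :3] = rotation
forward[:3, 3] = translation

# ... then invert it to obtain the transform in the opposite (Y<-X) direction
inverse = np.linalg.inv(forward)
```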


Spenhouet commented on May 26, 2024

Thanks for the quick reply and thanks for helping. I know this is not an issue with ITK but just my problem of trying to get this solved. So I appreciate any help.

I am applying the affine transformation matrix to Y (the moving image). Not seeing a mistake there.

But I'm really unsure about what itk returns as transform parameters. From my point of view it is just a list of values which somehow translate into an affine matrix. Is my general assumption correct that rotation is the first part and translation the last?

I did as you suggested:

rotation = transform_parameters[:9].reshape(3, 3)
translation = transform_parameters[-3:][..., np.newaxis]
reg_affine: np.ndarray = np.append(rotation, translation, axis=1)
reg_affine = np.append(reg_affine, [[0,0,0,1]], axis=0)
reg_affine = np.linalg.inv(reg_affine)

But this still does not result in the correct output.

There is also the CenterOfRotationPoint; do I need that? At least ITK's AffineTransform seems to want it as a parameter.

I'm working on a full minimal example. Will share shortly.


Spenhouet commented on May 26, 2024

Here my minimal example code with input and output data:

import nibabel
from nibabel import Nifti1Image
import numpy as np
from monai.transforms import Affine
import itk

# Import Images
moving_image = itk.imread('moving_2mm.nii.gz', itk.F)
fixed_image = itk.imread('fixed_2mm.nii.gz', itk.F)

# Import Default Parameter Map
parameter_object = itk.ParameterObject.New()
affine_parameter_map = parameter_object.GetDefaultParameterMap('affine', 4)
affine_parameter_map['FinalBSplineInterpolationOrder'] = ['1']
parameter_object.AddParameterMap(affine_parameter_map)

# Call registration function
result_image, result_transform_parameters = itk.elastix_registration_method(
    fixed_image, moving_image, parameter_object=parameter_object)
parameter_map = result_transform_parameters.GetParameterMap(0)
transform_parameters = np.array(parameter_map['TransformParameters'], dtype=float)

itk.imwrite(result_image, 'reg_itk.nii.gz', compression=True)

# Convert transform params to affine transform matrix
rotation = transform_parameters[:9].reshape(3, 3)
translation = transform_parameters[-3:][..., np.newaxis]
reg_affine: np.ndarray = np.append(rotation, translation, axis=1)  # type: ignore
reg_affine = np.append(reg_affine, [[0, 0, 0, 1]], axis=0)  # type: ignore
reg_affine = np.linalg.inv(reg_affine)

# Apply affine transform matrix via MONAI
moving_image_ni: Nifti1Image = nibabel.load('moving_2mm.nii.gz')
fixed_image_ni: Nifti1Image = nibabel.load('fixed_2mm.nii.gz')
moving_image_np: np.ndarray = moving_image_ni.get_fdata()  # type: ignore

affine_transform = Affine(affine=reg_affine, image_only=True)
reg_monai = np.squeeze(affine_transform(moving_image_np[np.newaxis, ...]))

out = Nifti1Image(reg_monai, fixed_image_ni.affine, header=fixed_image_ni.header)

nibabel.save(out, 'reg_monai.nii.gz')

Input data:
fixed_2mm.nii.gz
moving_2mm.nii.gz

Output data:
reg_itk.nii.gz
reg_monai.nii.gz

Package versions:

itk-elastix==0.12.0
monai==0.8.0
nibabel==3.1.1
numpy==1.19.2


dzenanz commented on May 26, 2024

This is what ITK returns as parameters:
https://github.com/InsightSoftwareConsortium/ITK/blob/v5.3rc03/Modules/Core/Transform/include/itkMatrixOffsetTransformBase.hxx#L488-L511
Here is what elastix docs say:
https://github.com/SuperElastix/elastix/blob/last-itk5.2-support/Components/Transforms/AdvancedAffineTransform/elxAdvancedAffineTransform.h#L35-L36


dzenanz commented on May 26, 2024

Yes, fixed parameters (usually center of rotation) matter. They are usually initialized to all zeroes, but that might not be true in your case. Also, elastix transforms are not the same as ITK transforms and differences are possible.


dzenanz commented on May 26, 2024

The nibabel library uses the RAS+ coordinate system by default, while ITK uses LPS+. That is another possible source of discrepancy.


mstaring commented on May 26, 2024

Yes, you need to consider the center of rotation. For affine it reads:
T(x) = A ( x - c ) + (t + c)
with the first 9 parameters filling A in row-major order, the last 3 filling t, and the fixed parameters filling c.
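Folding this formula into a single homogeneous 4x4 matrix might look like the following sketch (the function name is made up; the parameter layout follows the description above, with T(x) = A x + (t + c - A c)):

```python
import numpy as np

def elastix_affine_to_matrix(params, center):
    """Homogeneous 4x4 matrix for T(x) = A (x - c) + (t + c)."""
    params = np.asarray(params, dtype=float)
    A = params[:9].reshape(3, 3)   # first 9 values: matrix, row-major
    t = params[9:]                 # last 3 values: translation
    c = np.asarray(center, dtype=float)
    M = np.eye(4)
    M[:3, :3] = A
    M[:3, 3] = t + c - A @ c       # fold the center of rotation into the offset
    return M
```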


Spenhouet commented on May 26, 2024

Thanks for the code links. The ITK and elastix docs are rather sparse. The only thing I got from them is that my assumption about the first 9 and last 3 numbers was right.

I don't think it has something to do with orientation. The output is pretty close and not mirrored. I also tried changing the orientation of the image to LPS but this actually resulted in a mirrored output.

@mstaring Thanks for the math on the center of rotation. I only want to define the full affine for the transformation without applying it. Can I still apply the center of rotation?

Also, in what space are the center-of-rotation coordinates: the voxel space of the fixed image, the voxel space of the moving image, or world space?

For the above example images and code, ITK returns this center of rotation:

[  69.46565247,   66.69592285, -109.80116272]

Using this as a correction factor on the translation leads to very wrong translations, so applying it does not seem right.

The images have a voxel shape of 128 x 128 x 128.
Intuitively, the center should refer to the moving image.
If I project the center into world space using the affine of the moving image

center = np.array(parameter_map['CenterOfRotationPoint'], dtype=float)
points_ = np.insert(center, center.shape[-1], 1, axis=-1)
center_projected = (points_ @ moving_image_ni.affine.T)[:-1]

I get these coordinates:

array([ -63.5       ,  -63.5       , -283.10232544])

But again, the value range makes me doubt that I need to factor these in.

Maybe some interesting tidbit. The moving image was actually produced using this custom affine transformation:

array([[ 0.99500417, -0.09983342,  0.1       ,  0.5       ],
       [-0.09983342,  0.99500417,  0.1       , -0.5       ],
       [ 0.1       ,  0.1       ,  1.        ,  0.2       ],
       [ 0.        ,  0.        ,  0.        ,  1.        ]])

So the true transformation affine the registration should find is the inverse of that, which is:

array([[ 1.02800583,  0.11462834, -0.11426342, -0.43383606],
       [ 0.11462834,  1.02800583, -0.11426342,  0.47954143],
       [-0.11426342, -0.11426342,  1.02285268, -0.20457054],
       [ 0.        ,  0.        ,  0.        ,  1.        ]])

The code from my minimal example (including the inverse) yields this transformation affine:

array([[ 0.99538262, -0.09987735, -0.10029317, -0.25466545],
       [-0.09944811,  0.99546527, -0.09998216,  0.22553216],
       [-0.0998615 , -0.09995486,  1.00028445,  0.0509873 ],
       [ 0.        ,  0.        ,  0.        ,  1.        ]])

without the inverse:

array([[ 1.02757335,  0.11459412,  0.11448339,  0.23000557],
       [ 0.11410441,  1.02746452,  0.11413955, -0.20848751],
       [ 0.11398788,  0.11411115,  1.02255042, -0.04884404],
       [ 0.        ,  0.        ,  0.        ,  1.        ]])

So as you can see, these are already close to the expected affine, but both are slightly off and have partially wrong signs.
You can also see from the value range that applying the center of rotation would lead to a totally different affine.


Spenhouet commented on May 26, 2024

I'm out of ideas.

If anyone knows how to make my minimal example code work with the provided data that would be great.


Spenhouet commented on May 26, 2024

Maybe it is an orientation issue after all.

If I take my transformation affine without the inverse, and manually switch all signs according to the "true" transform affine, then the results match the results of the ITK registration output.

Currently looking into how I can switch these signs based on the LPS vs. RAS difference directly on the transformation affine matrix.
Will write if I find something.


Spenhouet commented on May 26, 2024

Short update here. Still no luck.

Nibabel has that concept of "ornt" / orientation, from which I can get the flip values for the translation values:

LPS = nibabel.orientations.axcodes2ornt(('L', 'P', 'S'))
RAS = nibabel.orientations.axcodes2ornt(('R', 'A', 'S'))
ornt_trans = nibabel.orientations.ornt_transform(LPS, RAS)

Where this outputs:

array([[ 0., -1.],
       [ 1., -1.],
       [ 2.,  1.]])

If I multiplied [-1, -1, 1] with the translation array, it would yield the correct signs, but this does not address the rotation matrix. I don't know how to address that.

In the MONAI ITK loader I found this code snippet:

flip_diag = [[-1, 1], [-1, -1, 1], [-1, -1, 1, 1]][sr - 1]  # itk to nibabel affine
affine = np.diag(flip_diag) @ affine

where [-1, -1, 1, 1] again matches the [-1, -1, 1], with an additional 1 for the homogeneous computation.

I already thought I found the solution but when I perform the computation

np.diag([-1, -1, 1, 1]) @ reg_affine

the signs of the rotation matrix do not match at all:

array([[-1.02757335, -0.11459412, -0.11448339, -0.23000557],
       [-0.11410441, -1.02746452, -0.11413955,  0.20848751],
       [ 0.11398788,  0.11411115,  1.02255042, -0.04884404],
       [ 0.        ,  0.        ,  0.        ,  1.        ]])


dzenanz commented on May 26, 2024

ITK's center of rotation should be in physical (world) coordinates. This index to physical discussion might be useful to you.
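The index-to-physical relationship ITK uses can be sketched in numpy; this is a simplified illustration of the convention, not ITK's actual implementation, and the function name is made up:

```python
import numpy as np

# ITK maps a (continuous) voxel index to a physical point as
#   p = origin + D @ (spacing * index)
# where D is the 3x3 direction cosine matrix of the image.
def index_to_physical(index, origin, spacing, direction):
    index = np.asarray(index, dtype=float)
    return (np.asarray(origin, dtype=float)
            + np.asarray(direction, dtype=float)
            @ (np.asarray(spacing, dtype=float) * index))
```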

Here is TorchIO code for transforming between 4x4 matrix and SimpleITK image. It handles LPS/RAS conversion.

Finally, you might need to pay attention to row-major vs column-major, AKA row-vectors vs column vectors layout of the matrices.


Spenhouet commented on May 26, 2024

@dzenanz I still don't get why the center of rotation should be important for me. The affine transformation does not really care where the origin or center is if I directly apply it to the data matrix. So I'm still not sure it has any relevance for what I'm trying to do. Not sure if it is clear what I'm trying to do?

With respect to row-major vs. column-major. I thought the rotation matrix bit is row-major. That is what the C++ code looked like, that is what the docs say and what was said here. So my reshaping should be correct (row-major).

Thanks for the thread on index to physical but I missed how that helps in my case.

Thanks for the link to TorchIOs code. What you linked there is basically what I'm already doing. See my code above. But it is not working.

Maybe to clarify that I'm working on the transformation affine and not the image affine.

I'm really stuck here. I feel I'm so close to the solution but don't quite get it; I must be missing something.
I have already put a lot of time into trying to solve this.
I have now opened a Stack Overflow question on this since I really need a solution: https://stackoverflow.com/questions/71441883/how-to-get-transformation-affine-from-itk-registration

Maybe someone is willing to execute my code with my files and see what the issue is.


dzenanz commented on May 26, 2024

4x4 matrix implicitly rotates around origin (0,0,0). If the transform you are trying to convert into 4x4 matrix has non-zero rotation center, you need to take it into account. I think Marius gave the formula.

Your numbers and type of problem look very similar to mine. There is no way around debugging it. I wish there had been somebody to debug it for me when I hit a block. The best approach: let it rest for a few days, then come back to it with a fresh mind.


Spenhouet commented on May 26, 2024

I think Marius gave the formula.

The formula was to apply the transformation, but I don't want to do that. I just want to adjust the transformation matrix itself. And again, adding the center results in a completely different (unusable) affine.

No way around debugging it.

Not sure what there is to debug? I expect I'm missing code that I don't know about, and I can't debug non-existent code. That's why I'm asking in the first place: I don't know what is missing.
I've already been on this for two days and I'm not getting anywhere, unless someone actually looks at the code I shared.
I'm not asking for someone to "debug" my code. Usually it is highly welcome when someone shares a minimal working example that can simply be executed.


Spenhouet commented on May 26, 2024

We got it solved. I explained the whole background and solution here: https://stackoverflow.com/a/71485934/2230045


Spenhouet commented on May 26, 2024

I would like to reopen this issue. I really struggle to get this working and have totally lost track of what makes sense and what does not.
I'm starting to doubt everything, even wondering whether ITK is just buggy, though more likely I'm making a mistake. But ITK not explaining what it is doing, and what the numbers it spits out actually contain, is probably why I cannot figure this out.

I want to be frank: the replies I got here feel dismissive. Mostly it seems that my posts were not actually read; in particular, my code was completely ignored. I have now been sitting on this for close to two weeks. I hope at some point someone is willing to actually read my problem and look into it.

To briefly reiterate:

FLIPXY_44 = np.diag([-1, -1, 1, 1])

parameter_map = result_transform_parameters.GetParameterMap(0)

rot00, rot01, rot02, rot10, rot11, rot12, rot20, rot21, rot22 = parameter_map['TransformParameters'][:9]
A = np.array([
    [rot00, rot01, rot02, 0],
    [rot10, rot11, rot12, 0],
    [rot20, rot21, rot22, 0],
    [    0,     0,     0, 1],
], dtype=float)  # yapf: disable

tx, ty, tz = parameter_map['TransformParameters'][9:]
t = np.array([
    [1, 0, 0, tx],
    [0, 1, 0, ty],
    [0, 0, 1, tz],
    [0, 0, 0,  1],
], dtype=float)  # yapf: disable

# In world coordinates
cx, cy, cz = parameter_map['CenterOfRotationPoint']
c = np.array([
    [1, 0, 0, cx],
    [0, 1, 0, cy],
    [0, 0, 1, cz],
    [0, 0, 0,  1],
], dtype=float)  # yapf: disable

moving_ras = moving_image.affine

It was stated that the correct formula to use the ITK registration parameters is:

T(x) = A ( x - c ) + (t + c)

Given everything is in world space and in LPS orientation, this looks to be equivalent to this calculation:

Tx = c @ t @ A @ np.linalg.inv(c)

Given that the images I want to apply the registration to are in voxel space and in RAS orientation, I assume I have to first move them into world space, flip them from RAS to LPS, and after the transform do the same in reverse.

reg = np.linalg.inv(moving_ras) @ FLIPXY_44 @ c @ t @ A @ np.linalg.inv(c) @ FLIPXY_44 @ moving_ras

But this does not work at all, so somewhere there must be a logical error.

I can share more data and any information. Would really appreciate if someone could look into it.


dzenanz commented on May 26, 2024

my code was completely ignored

This is probably true. I recently spent half a day figuring out a similar problem, and I therefore haven't even attempted to solve yours. I provided some advice. But Kitware does offer paid support, in case you need it.


Spenhouet commented on May 26, 2024

I noticed that my current minimal code example is not quite comprehensive, so here is an update.

from pathlib import Path

import nibabel
import numpy as np
from monai.transforms.spatial.array import Affine
from monai.utils.enums import GridSampleMode, GridSamplePadMode
from nibabel import Nifti1Image

np.set_printoptions(suppress=True)  # type: ignore

folder = Path('.')

FLIPXY_44 = np.diag([-1, -1, 1, 1])

# rot00, rot01, rot02, rot10, rot11, rot12, rot20, rot21, rot22 = parameter_map['TransformParameters'][:9]
A = np.array([[ 1.02380734, -0.05137566, -0.00766465,  0.        ],
              [ 0.01916231,  0.93276486, -0.23453097,  0.        ],
              [ 0.01808809,  0.2667324 ,  0.94271694,  0.        ],
              [ 0.        ,  0.        ,  0.        ,  1.        ]]) # yapf: disable

# tx, ty, tz = parameter_map['TransformParameters'][9:]
t = np.array([[ 1.        ,  0.        ,  0.        ,  1.12915465  ],
              [ 0.        ,  1.        ,  0.        , 11.76880151  ],
              [ 0.        ,  0.        ,  1.        , 41.54685788  ],
              [ 0.        ,  0.        ,  0.        ,  1.          ]]) # yapf: disable

# cx, cy, cz = parameter_map['CenterOfRotationPoint']
c = np.array([[ 1.        ,  0.        ,  0.        ,  -0.1015625  ],
              [ 0.        ,  1.        ,  0.        , -24.5521698  ],
              [ 0.        ,  0.        ,  1.        ,   0.1015625  ],
              [ 0.        ,  0.        ,  0.        ,   1.         ]]) # yapf: disable

# Moving image affine
x = np.array([[ 2.        ,  0.        ,  0.        , -125.75732422],
              [ 0.        ,  2.        ,  0.        , -125.23828888],
              [ 0.        ,  0.        ,  2.        ,  -99.86506653],
              [ 0.        ,  0.        ,  0.        ,    1.        ]]) # yapf: disable

moving_ras = x

# Combine the direction and translation
transform = t @ A

# Factor in the center of rotation
# transform = c @ transform @ np.linalg.inv(c)

# Switch from LPS to RAS orientation
registration = FLIPXY_44 @ transform @ FLIPXY_44

y = np.array([[ 2.        ,  0.        ,  0.        , -126.8984375 ],
              [ 0.        ,  2.        ,  0.        , -102.4478302 ],
              [ 0.        ,  0.        ,  2.        , -126.8984375 ],
              [ 0.        ,  0.        ,  0.        ,    1.        ]]) # yapf: disable

fixed_image_affine = y

moving_image_ni: Nifti1Image = nibabel.load(folder / 'real_moving.nii.gz')
moving_image_np: np.ndarray = moving_image_ni.get_fdata()  # type: ignore

affine_transform = Affine(affine=registration,
                          image_only=True,
                          mode=GridSampleMode.NEAREST,
                          padding_mode=GridSamplePadMode.BORDER)
out_img = affine_transform(moving_image_np[np.newaxis, ...])
reg_monai = np.squeeze(out_img)

out = Nifti1Image(reg_monai, fixed_image_affine)

nibabel.save(out, folder / 'reg_monai.nii.gz')

Here with new test data:
real_fixed.nii.gz
real_moving.nii.gz


Spenhouet commented on May 26, 2024

I also extended on the SO question with example images and a list of potential issues: https://stackoverflow.com/questions/71529039/how-to-factor-the-itk-centerofrotationpoint-in-a-affine-transformation-matrix


dzenanz commented on May 26, 2024

This might be useful: Registration: Spatial Image Definitions.


Yggdrasil1 commented on May 26, 2024

I also had an issue where the transformation matrix was off by just a few degrees and voxels. For me it was because of the order of operations.
If you use homogeneous matrices, you need to shift your volume so that the center of rotation is at (0,0,0,1), then apply the rotation, then shift your volume back to the original center of rotation, and then apply the transformation from elastix.

for a rigid transformation the correct order is:

complete_matrix = ext_trans @ back_trans_matrix @ rot_matrix @ trans_matrix

where trans_matrix is the homogeneous matrix for (CenterOfRotation * -1) and back_trans_matrix is the "back-movement" of it.
Maybe the .py file I use to handle this is helpful for you:

I save the transformation to a file, read the ParameterMap0.txt back in, and handle everything from that file.

Matrix_operations.zip
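The composition described above can be sketched like this (all values are placeholder assumptions, not taken from a real ParameterMap0.txt):

```python
import numpy as np

def translation_matrix(offset):
    """Homogeneous 4x4 matrix for a pure translation."""
    m = np.eye(4)
    m[:3, 3] = offset
    return m

# Placeholder values: a CenterOfRotationPoint, a rotation block,
# and the external (elastix) translation.
center = np.array([10.0, 20.0, 30.0])
rot_matrix = np.eye(4)                            # rotation about the origin
ext_trans = translation_matrix([1.0, 2.0, 3.0])   # elastix translation

trans_matrix = translation_matrix(-center)        # move rotation center to origin
back_trans_matrix = translation_matrix(center)    # move it back afterwards

complete_matrix = ext_trans @ back_trans_matrix @ rot_matrix @ trans_matrix
```

With an identity rotation, the center shift and its back-movement cancel out and only the external translation remains, which is a quick sanity check on the ordering.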


thewtex commented on May 26, 2024

A follow-up pointer: we now have a tutorial on metadata preservation that includes information on anatomical orientation and ITK-NiBabel bridges.


thewtex commented on May 26, 2024

It also covers how to properly handle spatial transformations, which is a common source of errors.


Spenhouet commented on May 26, 2024

Hi @ntatsisk, nice to hear something on this issue. We did not move on; rather, we were stuck with what we wanted to do. We implemented some workarounds that we are not happy with, so finding a real solution to this would still be very valuable.

Yes, loading images with nibabel is a requirement for us. What open issues are you experiencing when loading via nibabel?

At the moment we are not actively working on this, and I would need some free time to look into your PR (which right now I don't have).
Right now I cannot really promise anything time-wise.
But we still have our need, so we will come back to this topic eventually, potentially pretty soon.


ntatsisk commented on May 26, 2024

Thanks for the update. I haven't used nibabel at all, and hence I haven't tried to load your images with it. In addition, Elastix (and ITKElastix) are based on ITK, so it made more sense for the PR I submitted to stick to the ITK reader and consequently the LPS system. I'm not sure how helpful I can be with the nibabel/RAS part.

Anyway, one step at a time. Feel free to come back to this when/if you find the time, and we'll see from there.


Spenhouet commented on May 26, 2024

@ntatsisk I'm now back on the topic and tried your implementation.

I was not able to get it to work. I could also not find the test_regisration() test case you mentioned. EDIT: I found it in the commit history, but the stored result did not match for me.

A short remark on nibabel vs. MONAI: they use the same data structures, so if it works for MONAI, it will work for nibabel and vice versa. Neither nibabel nor MONAI uses the LPS system; both are RAS-based by default.

I stuck to the example images shared earlier but could not reproduce that this works now.

Here is my code using itk (itk-elastix) + monai + nibabel. It is the full code path that we need to work.

# %%
from pathlib import Path

import itk
from monai.data.meta_tensor import MetaTensor
from monai.transforms.spatial.array import Affine
from monai.utils.enums import GridSampleMode
from monai.utils.enums import GridSamplePadMode
import nibabel
from nibabel import Nifti1Image
import numpy as np
import torch
from utils.itk_torch_affine_matrix_bridge import itk_to_monai_affine

folder = Path('data')
moving_path = folder / 'real_moving.nii.gz'
fixed_path = folder / 'real_fixed.nii.gz'

# %%
# Perform registration with itk

# Load images with itk
moving_image = itk.imread(str(moving_path), itk.F)
fixed_image = itk.imread(str(fixed_path), itk.F)

# Import Default Parameter Map
parameter_object = itk.ParameterObject.New()  # type: ignore
affine_parameter_map = parameter_object.GetDefaultParameterMap('affine', 4)
affine_parameter_map['FinalBSplineInterpolationOrder'] = ['1']
parameter_object.AddParameterMap(affine_parameter_map)

# Call registration function
result_image, result_transform_parameters = itk.elastix_registration_method(  # type: ignore
    fixed_image, moving_image, parameter_object=parameter_object)

itk.imwrite(result_image, str(folder / 'real_itk_reg_itk.nii.gz'), compression=True)

# %%
# Apply affine transform matrix via MONAI Bridge

parameter_map = result_transform_parameters.GetParameterMap(0)
transform_parameters = parameter_map['TransformParameters']
# dir00, dir01, dir02, dir10, dir11, dir12, dir20, dir21, dir22
direction = np.array(transform_parameters[:9], dtype=float).reshape((3, 3))
# tx, ty, tz
translation = np.array(transform_parameters[9:], dtype=float).tolist()

center_of_rotation = np.array(parameter_map['CenterOfRotationPoint'], dtype=float).tolist()

monai_affine_transform = itk_to_monai_affine(
    image=moving_image,
    matrix=direction,
    translation=translation,
    center_of_rotation=center_of_rotation,
)

# %%
# Apply affine transform with MONAI

# Load images with Nibabel (assume RAS)
moving_image_ni: Nifti1Image = nibabel.load(moving_path)
fixed_image_ni: Nifti1Image = nibabel.load(fixed_path)
moving_image_tensor: torch.Tensor = torch.as_tensor(moving_image_ni.get_fdata(),
                                                    dtype=torch.float64)
moving_image_affine: torch.Tensor = torch.as_tensor(moving_image_ni.affine, dtype=torch.float64)

# Construct MetaTensor
moving_image_mt = MetaTensor(moving_image_tensor, affine=moving_image_affine)

# Define Affine transform
affine_transform = Affine(affine=monai_affine_transform,
                          image_only=True,
                          mode=GridSampleMode.BILINEAR,
                          padding_mode=GridSamplePadMode.BORDER)

# Add batch dim and perform affine transform
out_img: torch.Tensor = affine_transform(moving_image_mt[np.newaxis, ...])  # type: ignore
# Remove batch dim
reg_monai = np.squeeze(out_img.numpy())

out = Nifti1Image(reg_monai, fixed_image_ni.affine, header=fixed_image_ni.header)

nibabel.save(out, folder / 'real_reg_monai.nii.gz')

Installed package versions:

itk-elastix==0.15.0
monai==1.1.0
nibabel==5.0.0
numpy==1.24.1
torch==1.8.1

Output of the itk registration real_itk_reg_itk.nii.gz looks good (green = fixed image, blue = itk registered image):

image

Output of the monai affine transform real_reg_monai.nii.gz using the itk transform affine matrix converted using your implementation (green = fixed image, red = monai affine transformed image):

image

What am I doing wrong? Am I properly using your implementation?


Spenhouet commented on May 26, 2024

As mentioned in my edit above, I did find your test_regisration() test case implementation, but I could not make it work either.

This is the adapted code:

# %%
from pathlib import Path

import itk
import nibabel
from nibabel import Nifti1Image
import numpy as np
from utils.itk_torch_affine_matrix_bridge import image_to_metatensor
from utils.itk_torch_affine_matrix_bridge import itk_to_monai_affine
from utils.itk_torch_affine_matrix_bridge import monai_affine_resample
from utils.itk_torch_affine_matrix_bridge import remove_border

folder = Path('data')
moving_path = folder / 'real_moving.nii.gz'
fixed_path = folder / 'real_fixed.nii.gz'
ndim = 3

# %%
# Perform registration with itk

# Load images with itk
moving_image = itk.imread(str(moving_path), itk.F)
fixed_image = itk.imread(str(fixed_path), itk.F)

# MONAI seems to have different interpolation behavior at the borders, and
# no option matches exactly ITK/Elastix. Hence, we pad to allow for direct
# numerical comparison later at the outputs.
fixed_image[:] = remove_border(fixed_image)
moving_image[:] = remove_border(moving_image)

# Import Default Parameter Map
parameter_object = itk.ParameterObject.New()  # type: ignore
affine_parameter_map = parameter_object.GetDefaultParameterMap('affine', 4)
affine_parameter_map['ResampleInterpolator'] = ['FinalLinearInterpolator']
parameter_object.AddParameterMap(affine_parameter_map)

# Call registration function
result_image, result_transform_parameters = itk.elastix_registration_method(  # type: ignore
    fixed_image, moving_image, parameter_object=parameter_object)

itk.imwrite(result_image, str(folder / 'real_itk_reg_itk.nii.gz'), compression=True)

# %%
# Apply affine transform matrix via MONAI Bridge

parameter_map = result_transform_parameters.GetParameterMap(0)
transform_parameters = parameter_map['TransformParameters']
# dir00, dir01, dir02, dir10, dir11, dir12, dir20, dir21, dir22
direction = np.array(transform_parameters[:ndim * ndim], dtype=float).reshape((ndim, ndim))
# tx, ty, tz
translation = np.array(transform_parameters[-ndim:], dtype=float).tolist()

center_of_rotation = np.array(parameter_map['CenterOfRotationPoint'], dtype=float).tolist()

monai_affine_transform = itk_to_monai_affine(
    image=moving_image,
    matrix=direction,
    translation=translation,
    center_of_rotation=center_of_rotation,
)

# %%
# Apply affine transform with MONAI

# Load images with Nibabel (assume RAS)
fixed_image_ni: Nifti1Image = nibabel.load(fixed_path)

moving_image_mt = image_to_metatensor(moving_image)

reg_monai = monai_affine_resample(moving_image_mt, affine_matrix=monai_affine_transform)

out = Nifti1Image(reg_monai, fixed_image_ni.affine, header=fixed_image_ni.header)

nibabel.save(out, folder / 'real_reg_monai.nii.gz')

The output is (green = fixed image, red = monai affine transformed image using your test case implementation):
image

I must be missing something.


Spenhouet commented on May 26, 2024

@ntatsisk To avoid any mistake on my part I completely 1:1 took your code from this commit hash: https://github.com/InsightSoftwareConsortium/itk-torch-transform-bridge/tree/affa7b375ad885f3489fdb12bec624b064dc8241/src

But the test case output is:

TEST: registration with direction different than identity
ITK equals result: False
MONAI equals result: False
MONAI equals ITK: True

I used the same files as I previously shared and you said you tested the method with.
real_fixed.nii.gz
real_moving.nii.gz

I also checked out your initial commit state: https://github.com/InsightSoftwareConsortium/itk-torch-transform-bridge/tree/414f85487b71a5bf066dbf54ac9939b08fe7ae46/src

But same output. In that code you also stored the itk-transformed image transformed_itk.nii.gz, but it does not even show up in the same space as the original image.

I'm not sure what to make of this.
Could you please help explain it?


ntatsisk commented on May 26, 2024

As I mentioned in InsightSoftwareConsortium/itk-torch-transform-bridge#6 (comment), let's continue the discussion in this issue here which is ITKElastix focused. I will try to debug it in the coming week.


Spenhouet commented on May 26, 2024

I might have found all issues. I wrote them down in this comment: InsightSoftwareConsortium/itk-torch-transform-bridge#6 (comment)

