
Comments (5)

kylechampley commented on September 14, 2024

Garrett,

Tomography is very geometrical, so many things can be explained by visualizing the geometry.

OK, so assume just a 2D CT scan. Then one must collect projections over all directions in order to uniquely reconstruct a point in the object. Rays that travel in opposite directions (separated by 180 degrees) measure the same thing, so it is enough to collect projections over 180 degrees in order to reconstruct. But in the half-fan case, the detector only covers half the object. Now imagine this detector rotating around: a fixed point in space will only be in the projection about half of the time because, again, the detector only covers half the object. Thus one needs a full 360 degrees of projections in order to guarantee that each point in the object is covered by 180 degrees of projections.

Hopefully this makes sense. It may help to draw a picture of what is happening.
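
To make the argument concrete, here is a small numerical sketch (the flat-detector model and every number in it are illustrative assumptions, not values from this dataset). It places a point off-center, sweeps the source over either 180 or 360 degrees, and counts how often the point lands on a half-fan detector and how many distinct ray directions through it are measured. With only 180 degrees of gantry rotation the point is barely seen; with 360 degrees it is viewed from essentially the full 180 degrees of directions.

import numpy as np

# Hypothetical numbers for illustration only (not taken from this dataset)
sod, sdd = 1000.0, 1500.0      # source-to-isocenter and source-to-detector distances (mm)
uMin, uMax = -10.0, 400.0      # half-fan detector: barely reaches past the rotation axis on one side
p = np.array([150.0, 0.0])     # a fixed point in the object, 150 mm off-center

def coverage(scanRangeDeg):
    # Returns the fraction of views in which p is on the detector and how many
    # of the 180 possible ray directions (1-degree bins) are measured for p.
    dirBins = np.zeros(180, dtype=bool)
    nVisible = 0
    phis = np.deg2rad(np.arange(0.0, scanRangeDeg, 0.25))
    for phi in phis:
        e = np.array([np.cos(phi), np.sin(phi)])           # isocenter-to-source direction
        ePerp = np.array([-np.sin(phi), np.cos(phi)])       # lateral detector axis
        u = sdd * np.dot(p, ePerp) / (sod - np.dot(p, e))   # flat-detector coordinate of p
        if uMin <= u <= uMax:
            nVisible += 1
            d = p - sod * e                                  # ray from the source through p
            dirDeg = np.degrees(np.arctan2(d[1], d[0])) % 180.0
            dirBins[int(dirDeg)] = True
    return nVisible / phis.size, int(dirBins.sum())

for scanRange in (180.0, 360.0):
    frac, covered = coverage(scanRange)
    print(f'{scanRange:.0f}-degree scan: point seen in {100*frac:.0f}% of views, '
          f'{covered}/180 degrees of ray directions measured')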


kylechampley commented on September 14, 2024

Hi Garrett,

Thanks for posting this issue!

The script below reconstructs the data using FBP. I did not use the information tagged as "Matrix" in the xml file; I don't quite understand how to interpret it. It may specify a detector rotation, which could improve the results if applied. Let me know if you need help with this or whether you are fine with just ignoring it.

import sys
import os
import time
import glob
import numpy as np
import matplotlib.pyplot as plt
from leapctype import *
leapct = tomographicModels()

dataPath = r'D:\tomography\CV_P1_T_01\Proj'

# Read the gantry angles (in degrees) from the geometry file
import xml.etree.ElementTree as ET
tree = ET.parse(os.path.join(dataPath, 'Geometry.xml'))
root = tree.getroot()
allEntries = root.findall('.//GantryAngle')
numAngles = len(allEntries)
angles = np.zeros(numAngles, dtype=np.float32)
for n in range(numAngles):
    angles[n] = float(allEntries[n].text)
# Remove 360-degree jumps so the angle sequence is monotonic
angles = np.unwrap(angles*np.pi/180.0)*180.0/np.pi

# Scanner geometry
sdd = 1500.0        # source-to-detector distance (mm)
sod = 1000.0        # source-to-object distance (mm)
pixelSize = 0.388   # detector pixel pitch (mm)
numCols = 1024
numRows = 768
# The detector is shifted laterally by 148 mm (half-fan / offset scan)
centerCol = 0.5*(numCols-1) - 148.0/pixelSize

leapct.set_conebeam(numAngles, numRows, numCols, pixelSize, pixelSize, 0.5*(numRows-1), centerCol, angles, sod, sdd)
leapct.set_offsetScan(True)
leapct.set_truncatedScan(True)
leapct.set_volume(450, 450, 220, 1.0, 1.0)

g = leapct.allocate_projections()
f = leapct.allocate_volume()

# Read the projection files; sort the list so the projection order is deterministic
files = sorted(glob.glob(os.path.join(dataPath, 'Proj_*[0-9].bin')))
for n in range(len(files)):
    anImage = np.reshape(np.fromfile(files[n], dtype=np.float32), (numRows, numCols))
    g[n,:,:] = anImage[:,:]

leapct.FBP(g, f)
leapct.display(f)


Gstevenson3 commented on September 14, 2024

Thank you Kyle!

I was making several mistakes on the data reading and LEAP side, but have a working single-frame FBP result now.

I'm currently working on integrating your advice from this issue.

This particular dataset has 10 distinct respiratory bins and, in turn, 10 ground-truth (GT) images to compare against, and some bins have less than 360 degrees of data. Hence the need to integrate your "copy the first projection to the end to make 360" suggestion.
This "10 images from 680 projections" idea is foreign to me because, at least in the parallel-beam world, I'm used to making numImages = numProjections.

So I'd like to leave this issue open for now, in case I can formulate any more advanced follow-up questions!


kylechampley commented on September 14, 2024

Garrett, I'm afraid I don't understand your comments. There is no relation between the number of projections and the number of slices in the reconstruction. For parallel-beam, the number of slices in the reconstruction does match the number of detector rows. Is that what you meant?


Gstevenson3 commented on September 14, 2024

Hi Kyle,

Apologies. I'm conflating a few things together into word vomit.

At a high level, I was trying to convey that the requirement with set_offsetScan(True) that 360 degrees of angular range be included in every reconstructed frame is new to me. In other non-offset, parallel-beam datasets I've not had this constraint, and I got used to generating a reconstructed frame for every projection (which is what I meant by numImages = numProjections).

I've gotten that part of my problem working.

I don't have any ongoing issues at the moment, but would like to leave this thread open for a little longer in case any come up.

Thanks again for all your support!

