tomopyui's Issues

check that theta is not nan before passing to tomopy recon

In backend.util.center.write_center, a check is needed to ensure that the input theta (from ProjectionsBase.angles_rad, which defaults to None) is not None. Otherwise line 32 -- theta = dtype.as_float32(theta) -- converts theta to nan, which tomopy's recon accepts and produces garbage images, like this:

[screenshot: garbage reconstruction output]
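A guard along these lines would fail loudly instead of silently reconstructing nan angles. This is only a sketch; check_theta is a hypothetical helper, not tomopyui or tomopy API:

```python
import numpy as np

def check_theta(theta):
    # hypothetical guard: ProjectionsBase.angles_rad defaults to None,
    # and numpy silently converts None to nan under a float dtype
    if theta is None:
        raise ValueError("theta is None: set angles before reconstructing")
    theta = np.asarray(theta, dtype=np.float32)
    if np.any(np.isnan(theta)):
        raise ValueError("theta contains nan")
    return theta
```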

No such file or directory with TIF import

When importing a TIF stack as prenormalized data, tomopyui reads the data correctly and shows all angles in the import tab, but clicking the "copy data from imported projections" button in the prep tab, or attempting to calculate the center of rotation, raises an error that the file does not exist. I've tried trimming .xrm.bim.binprj out of the file names before .tif, but that doesn't fix the error. The path in the error is C:\Users\dpenn\tomopyui\*.tif, but the files are stored in C:\Users\dpenn\Documents\Chemistry\tolbert\alloy_anodes\TXM\20220303_Pennington\BulkSbSn\BulkSbSn2sec1tile_TOMO_220304_051839\RefCorr\croped\BC_croped

Copying the BC_croped folder into the tomopyui directory doesn't work either. tomopyui writes a folder in the BC_croped directory with two files, import_metadata.json and normalized_projections.hdf5.

```

UnknownFormatError Traceback (most recent call last)
File ~\tomopyui\tomopyui\widgets\center.py:310, in Center.find_center_manual_on_click(self, change)
302 cen_range = [
303 self.center_guess - self.search_range,
304 self.center_guess + self.search_range,
305 self.search_step,
306 ]
308 # reconstruct, but also pull the centers used out to map to center
309 # textbox
--> 310 self.rec, self.cen_range = write_center(
311 prj_imgs,
312 angles_rad,
313 cen_range=cen_range,
314 ind=self.index_to_try,
315 mask=True,
316 algorithm=self.algorithm,
317 filter_name=self.filter,
318 num_iter=self.num_iter,
319 )
320 if self.rec is None:
321 self.find_center_manual_button.button_style = "warning"

File ~\tomopyui\tomopyui\backend\util\center.py:33, in write_center(tomo, theta, cen_range, ind, num_iter, mask, ratio, algorithm, sinogram_order, filter_name)
30 if theta is None:
31 return None, cen_range
---> 33 tomo = dtype.as_float32(tomo)
34 theta = dtype.as_float32(theta)
36 dt, dy, dx = tomo.shape

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\tomopy\util\dtype.py:95, in as_float32(arr)
94 def as_float32(arr):
---> 95 arr = as_ndarray(arr, np.float32)
96 return as_dtype(arr, np.float32)

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\tomopy\util\dtype.py:84, in as_ndarray(arr, dtype, copy)
82 def as_ndarray(arr, dtype=None, copy=False):
83 if not isinstance(arr, np.ndarray):
---> 84 arr = np.array(arr, dtype=dtype, copy=copy)
85 return arr

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\dask\array\core.py:1630, in Array.__array__(self, dtype, **kwargs)
1629 def __array__(self, dtype=None, **kwargs):
-> 1630 x = self.compute()
1631 if dtype and x.dtype != dtype:
1632 x = x.astype(dtype)

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\dask\base.py:292, in DaskMethodsMixin.compute(self, **kwargs)
268 def compute(self, **kwargs):
269 """Compute this dask collection
270
271 This turns a lazy Dask collection into its in-memory equivalent.
(...)
290 dask.base.compute
291 """
--> 292 (result,) = compute(self, traverse=False, **kwargs)
293 return result

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\dask\base.py:575, in compute(traverse, optimize_graph, scheduler, get, *args, **kwargs)
572 keys.append(x.dask_keys())
573 postcomputes.append(x.dask_postcompute())
--> 575 results = schedule(dsk, keys, **kwargs)
576 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\dask\threaded.py:81, in get(dsk, result, cache, num_workers, pool, **kwargs)
78 elif isinstance(pool, multiprocessing.pool.Pool):
79 pool = MultiprocessingPoolExecutor(pool)
---> 81 results = get_async(
82 pool.submit,
83 pool._max_workers,
84 dsk,
85 result,
86 cache=cache,
87 get_id=_thread_get_id,
88 pack_exception=pack_exception,
89 **kwargs,
90 )
92 # Cleanup pools associated to dead threads
93 with pools_lock:

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\dask\local.py:508, in get_async(submit, num_workers, dsk, result, cache, get_id, rerun_exceptions_locally, pack_exception, raise_exception, callbacks, dumps, loads, chunksize, **kwargs)
506 _execute_task(task, data) # Re-execute locally
507 else:
--> 508 raise_exception(exc, tb)
509 res, worker_id = loads(res_info)
510 state["cache"][key] = res

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\dask\local.py:316, in reraise(exc, tb)
314 if exc.__traceback__ is not tb:
315 raise exc.with_traceback(tb)
--> 316 raise exc

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\dask\local.py:221, in execute_task(key, task_info, dumps, loads, get_id, pack_exception)
219 try:
220 task, data = loads(task_info)
--> 221 result = _execute_task(task, data)
222 id = get_id()
223 result = dumps((result, id))

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\dask\core.py:119, in _execute_task(arg, cache, dsk)
115 func, args = arg[0], arg[1:]
116 # Note: Don't assign the subtask results to a variable. numpy detects
117 # temporaries by their reference count and can execute certain
118 # operations in-place.
--> 119 return func(*(_execute_task(a, cache) for a in args))
120 elif not ishashable(arg):
121 return arg

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\dask\optimization.py:990, in SubgraphCallable.__call__(self, *args)
988 if not len(args) == len(self.inkeys):
989 raise ValueError("Expected %d args, got %d" % (len(self.inkeys), len(args)))
--> 990 return core.get(self.dsk, self.outkey, dict(zip(self.inkeys, args)))

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\dask\core.py:149, in get(dsk, out, cache)
147 for key in toposort(dsk):
148 task = dsk[key]
--> 149 result = _execute_task(task, cache)
150 cache[key] = result
151 result = _execute_task(out, cache)

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\dask\core.py:119, in _execute_task(arg, cache, dsk)
115 func, args = arg[0], arg[1:]
116 # Note: Don't assign the subtask results to a variable. numpy detects
117 # temporaries by their reference count and can execute certain
118 # operations in-place.
--> 119 return func(*(_execute_task(a, cache) for a in args))
120 elif not ishashable(arg):
121 return arg

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\dask\utils.py:39, in apply(func, args, kwargs)
37 def apply(func, args, kwargs=None):
38 if kwargs:
---> 39 return func(*args, **kwargs)
40 else:
41 return func(*args)

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\dask\array\core.py:513, in _pass_extra_kwargs(func, keys, *args, **kwargs)
506 """Helper for :func:dask.array.map_blocks to pass block_info or block_id.
507
508 For each element of keys, a corresponding element of args is changed
509 to a keyword argument with that key, before all arguments re passed on
510 to func.
511 """
512 kwargs.update(zip(keys, args))
--> 513 return func(*args[len(keys) :], **kwargs)

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\dask_image\imread\__init__.py:99, in _map_read_frame(x, multiple_files, block_info, **kwargs)
96 else:
97 i, j = block_info[None]['array-location'][0]
---> 99 return _utils._read_frame(fn=fn, i=slice(i, j), **kwargs)

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\dask_image\imread\_utils.py:7, in _read_frame(fn, i, arrayfunc)
6 def _read_frame(fn, i, *, arrayfunc=np.asanyarray):
----> 7 with pims.open(fn) as imgs:
8 return arrayfunc(imgs[i])

File ~\anaconda3\envs\tomopyui-doran\lib\site-packages\pims\api.py:209, in open(sequence, **kwargs)
207 warn(message)
208 exceptions += message + '\n'
--> 209 raise UnknownFormatError("All handlers returned exceptions:\n" + exceptions)

UnknownFormatError: All handlers returned exceptions:
<class 'pims.tiff_stack.TiffStack_tifffile'> errored: [Errno 2] No such file or directory: 'C:\Users\dpenn\tomopyui\crop_tomo_BulkSbSn2sec1tile_idx00001_-089.00deg.xrm.bim.binprj.tif'
<class 'pims.imageio_reader.ImageIOReader'> errored: The ImageIOReader requires imageio and (for imageio >= 2.5) imageio-ffmpeg to work.
<class 'pims.bioformats.BioformatsReader'> errored: The file "crop_tomo_BulkSbSn2sec1tile_idx00001_-089.00deg.xrm.bim.binprj.tif" does not exist.
```
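The traceback shows pims resolving the bare file name against C:\Users\dpenn\tomopyui (likely the current working directory) rather than the data directory, which suggests the lazily-read file list is stored as relative paths. A hedged sketch of one possible fix, assuming a hypothetical collect_tiffs helper rather than tomopyui's actual import code:

```python
from pathlib import Path

def collect_tiffs(filedir):
    # hypothetical helper: resolve to absolute paths at import time so
    # later (lazy) dask-image reads don't depend on the working directory
    return sorted(str(p.resolve()) for p in Path(filedir).glob("*.tif"))
```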

Manually updating quick path search bar throws error

When updating the quick search bar manually, every change to the text triggers update_filechooser_from_quicksearch, which throws an error if the directory doesn't exist. This makes it impossible to type a path in manually. Ideally, the behavior should differ depending on whether the quick search bar is changed manually or via the filechooser: in the former case, bad filepaths should be caught instead of raising an error, and self.filepath should probably be reset to None.

[screenshot: error thrown while typing a path into the quick search bar]

[feature idea] scale bar

Add a scale bar to the images.

This would be another bqplot ImageGL mark. It would have to be generated with prior knowledge of the pixel scale/binning of the image.

For example, if the pixel width is 20 nm and the binning is 2, each pixel would be 40 nm. This would come from image metadata and/or scan parameters.
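The pixel-size arithmetic above can be sketched as follows (scalebar_pixels is a hypothetical helper, not tomopyui API):

```python
def scalebar_pixels(bar_length_nm, pixel_size_nm, binning):
    # effective pixel size grows with binning: 20 nm at 2x binning -> 40 nm
    effective_nm = pixel_size_nm * binning
    return bar_length_nm / effective_nm

# a 400 nm bar on 20 nm pixels binned 2x spans 10 display pixels
print(scalebar_pixels(400, 20, 2))  # 10.0
```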

[improvement] Streaks in projections during alignment

During alignment, streaks appear in the dataset. This could be due to one of two things:

  1. Downsampling the data introduces the streaks because of the padding.
  • solution: pad after downsampling the data. This would require downsampling the padding.
  2. Shifting the data introduces these streaks.
  • unsure of the solution here; maybe use a filter?

Streaking could cause alignment to work less effectively: phase correlation could lock onto these streaks instead of the features.
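The proposed fix in point 1 can be sketched as below. This is a hypothetical helper: simple strided slicing stands in for tomopyui's actual downsampling, and the pad width is arbitrary:

```python
import numpy as np

def downsample_then_pad(stack, ds=2, pad=16):
    # pad AFTER downsampling so the pad region is never itself
    # downsampled (the suspected source of the streaks)
    small = stack[:, ::ds, ::ds]  # axes assumed (angle, y, x)
    return np.pad(small, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
```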

unable to import .xrm

This is the same dataset as the other one giving me issues. When trying to import .xrm instead of .tif, I get the error "IndexError: list index out of range". The files are named as follows in the directory C:\Users\dpenn\Documents\Chemistry\tolbert\alloy_anodes\TXM\20220303_Pennington\BulkSbSn\BulkSbSn2sec1tile_TOMO_220304_051839

ScanInfo_BulkSbSn2sec1tile_220304_051839.txt
ref_BulkSbSn2sec1tile_idx00000_+000.00deg_img001of010.xrm, which goes up to image 10
tomo_BulkSbSn2sec1tile_idx00000_-090.00deg.xrm, which goes from index 0 to 179 and angle -90 to +89

Here is the error message:

```

IndexError Traceback (most recent call last)
File ~\tomopyui\tomopyui\widgets\helpers.py:336, in ImportButton.run_callback(self, *args)
334 self.button.icon = self.icon_during
335 self.button.description = self.description_during
--> 336 self.callback()
337 self.button.button_style = self.button_style_after
338 self.button.icon = self.icon_after

File ~\tomopyui\tomopyui\widgets\imports.py:1369, in RawUploader_SSRL62C.import_data(self)
1366 def import_data(self):
1368 tic = time.perf_counter()
-> 1369 self.projections.import_filedir_all(self)
1370 toc = time.perf_counter()
1371 self.projections.status_label.value = (
1372 f"Import and normalization took {toc-tic:.0f}s"
1373 )

File ~\tomopyui\tomopyui\backend\io.py:921, in RawProjectionsXRM_SSRL62C.import_filedir_all(self, Uploader)
917 self.selected_energies = (Uploader.energy_select_multiple.options[0],)
918 Uploader.energy_select_multiple.value = (
919 Uploader.energy_select_multiple.options[0],
920 )
--> 921 self.import_from_run_script(Uploader)
922 self.imported = True

File ~\tomopyui\tomopyui\backend\io.py:1245, in RawProjectionsXRM_SSRL62C.import_from_run_script(self, Uploader)
1243 self.flats_ind_from_collect(collect)
1244 self.status_label.value = "Normalizing."
-> 1245 self._data = RawProjectionsBase.normalize_and_average(
1246 projs,
1247 flats,
1248 darks,
1249 self.flats_ind,
1250 self.scan_info["NEXPOSURES"],
1251 status_label=self.status_label,
1252 compute=False,
1253 )
1254 self.data = self._data
1256 self.status_label.value = "Saving projections as .npy for faster IO."

File ~\tomopyui\tomopyui\backend\io.py:715, in RawProjectionsBase.normalize_and_average(projs, flats, dark, flat_loc, num_exposures_per_proj, status_label, compute)
706 # Projection locations defined as the centerpoint between two reference
707 # collections
708 # Chunk the projections such that they will be divided by the nearest flat
709 # The first chunk of data will be divided by the first flat.
710 # The first chunk of data is likely smaller than the others.
711 proj_locations = [
712 int(np.ceil((flat_loc[i] + flat_loc[i + 1]) / 2))
713 for i in range(len(flat_loc) - 1)
714 ]
--> 715 chunk_setup = [int(np.ceil(proj_locations[0]))]
716 for i in range(len(proj_locations) - 1):
717 chunk_setup.append(proj_locations[i + 1] - proj_locations[i])

IndexError: list index out of range
```
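The failing line is reachable whenever flat_loc has fewer than two entries. A minimal reproduction with hypothetical input (the real flat_loc comes from flats_ind_from_collect):

```python
import numpy as np

# hypothetical input: only one flat-field location detected
flat_loc = [0]
proj_locations = [
    int(np.ceil((flat_loc[i] + flat_loc[i + 1]) / 2))
    for i in range(len(flat_loc) - 1)
]
# proj_locations == [], so chunk_setup = [int(np.ceil(proj_locations[0]))]
# raises IndexError: list index out of range
```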

[feature] add option for simple slicing for downsampling

Data import can be time-consuming for large datasets, in part due to downsampling overhead. This time could be reduced if a simpler downsampling method (slicing) were implemented. It could be added as an option during import.
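A minimal sketch of slicing-based downsampling (hypothetical helper; axes assumed (angle, y, x)):

```python
import numpy as np

def downsample_slice(stack, factor=2):
    # keep every `factor`-th pixel along y and x: fast and copy-light,
    # at the cost of more aliasing than averaged/binned downsampling
    return stack[:, ::factor, ::factor]
```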

Histogram fails when projection data are all zeros

This code should handle the case where the imagestack is constant:

Line 507 in plot.py:

```
self.hists = [
    bq.Bins(
        sample=self.implotter.imagestack.ravel(),
        scales={
            "x": self.x_sc,
            "y": self.y_sc,
        },
        colors=["dodgerblue"],
        opacities=[0.75],
        orientation="horizontal",
        bins="sqrt",
        density=True,
    )
]
```
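One possible guard, as a sketch (histogram_sample is a hypothetical helper, not tomopyui API): skip building the histogram when the sample has zero range.

```python
import numpy as np

def histogram_sample(imagestack):
    # hypothetical guard: a constant stack (e.g. all zeros) can't be
    # binned meaningfully, so return None and let the caller skip
    # drawing the histogram
    sample = np.asarray(imagestack).ravel()
    if np.ptp(sample) == 0:
        return None
    return sample
```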

[feature] import prenormalized data

Add a feature where prenormalized data can be accepted in the form of npy or tiff files. If no metadata is present within the data folder, ask on the frontend for that metadata.

Default grayscaling of prenormalized data viewer in import tab

For “prenormalized data”, the default scaling is a bit confusing. I loaded my own fake data with a 0-20 grayscale range. At first it was completely saturated (upper left); then I clicked the “remove high and low intensities from view” button and it looked how I expected (upper right). Then I reloaded the default data, and again the default scaling was off (lower left) until I clicked the button (lower right). Maybe change the default scaling method when first loading data?

[screenshot: four-panel comparison of default vs. clipped grayscale]
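One common default that avoids the initial saturation is percentile-based contrast limits. A hedged sketch (robust_clim is a hypothetical helper, not tomopyui API; the percentiles are arbitrary choices):

```python
import numpy as np

def robust_clim(image, lo=0.5, hi=99.5):
    # percentile-based display limits: a handful of hot or dead pixels
    # no longer saturates the default grayscale
    return float(np.percentile(image, lo)), float(np.percentile(image, hi))
```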

Loading status indicator for imports tab

Make the big teal button say “load data” or something (instead of “null”). Could there be a flashing light or progress bar to indicate that data is loading? It can be hard to tell what’s going on when you switch datasets (especially if they’re big).
[screenshot: import tab with the button labeled “null”]

Zoom/rectangular selection functionality in import tab viewer

I don’t understand the “rectangular selection” button. I can select regions, but I don't understand what the selection does. Can you make its intended purpose clearer?
Also, can you add zoom functionality in place of, or in addition to, the rectangular selection?
[screenshot: import tab viewer with rectangular selection]

misc suggestions

  1. voila

Have you considered also using voila to turn this into an easy-to-use, webpage-like app for users?

  2. pre-commit.ci

Enabling https://pre-commit.ci/ can be pretty nice for automated style checking and error correction.

  3. build docs with readthedocs on PRs
    Instructions here: https://docs.readthedocs.io/en/latest/pull-requests.html

https://github.com/samwelborn/tomopyui/blob/d3527c423290c25507768bbc2029c1bd7b5ecd09/tomopyui/widgets/meta.py#L443

Can you describe the behavior you want here? I'm definitely open to adding new features.

[feature idea] feature for shifting one projection manually

In the align tab, there could be an option where we can manually shift a single projection image.

Buttons would be "up, down, left, right," and would sit on the right side of the app under the altered projections. This would likely require pre-padding the entire altered dataset so that no image information is lost. We also don't want to "wrap" the image around to the other side - that may impact reconstruction (not completely sure about this).
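The pre-pad-then-shift idea can be sketched as below. This is a hypothetical helper: np.roll stands in for whatever shift routine tomopyui actually uses, and the pad absorbs the rolled-over pixels so no image content wraps:

```python
import numpy as np

def shift_no_wrap(img, dy, dx, pad):
    # pre-pad so shifted pixels move into the pad region instead of
    # wrapping to the opposite edge; valid while |shift| <= pad
    assert abs(dy) <= pad and abs(dx) <= pad
    padded = np.pad(img, pad, mode="constant")
    return np.roll(padded, (dy, dx), axis=(0, 1))
```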

Error when aligning prenormalized tiffs

I've been running into an error where I'm not able to finish aligning my tiff files; it says TypeError: 'NoneType' object is not iterable.

It writes a folder in my data directory that contains normalized_projections.hdf5, import_metadata.json, and an alignment folder which only contains a 14 kB overall_alignment_metadata.json file. Here is the full error output:

```
TypeError Traceback (most recent call last)
File ~\tomopyui\tomopyui\widgets\analysis.py:385, in AnalysisBase.set_options_and_run(self, change)
381 change.icon = "fas fa-cog fa-spin fa-lg"
382 change.description = (
383 "Setting options and loading data into alignment algorithm."
384 )
--> 385 self.run()
386 change.button_style = "success"
387 change.icon = "fa-check-square"

File ~\tomopyui\tomopyui\widgets\analysis.py:652, in Align.run(self)
651 def run(self):
--> 652 self.analysis = TomoAlign(self)
653 self.analysis_projections = Projections_Prenormalized()
654 self.analysis_projections.data = self.analysis.projections_aligned

File ~\tomopyui\tomopyui\backend\align.py:49, in TomoAlign.__init__(self, Align)
47 self.plot_output2 = Align.plot_output2
48 self.make_wd()
---> 49 self.run()

File ~\tomopyui\tomopyui\backend\align.py:260, in TomoAlign.run(self)
258 self.init_prj()
259 tic = perf_counter()
--> 260 self.align()
261 # make new dataset and pad/shift it
262 self._shift_prjs_after_alignment()

File ~\tomopyui\tomopyui\backend\align.py:159, in TomoAlign.align(self)
154 if (
155 method in astra_cuda_recon_algorithm_underscores
156 and os.environ["cuda_enabled"] == "True"
157 ):
158 self.current_align_is_cuda = True
--> 159 align_joint_cupy(self)
160 else:
161 self.current_align_is_cuda = False

File ~\tomopyui\tomopyui\tomocupy\prep\alignment.py:58, in align_joint(TomoAlign)
56 subset_x = TomoAlign.subset_x
57 subset_y = TomoAlign.subset_y
---> 58 subset_x = [int(x) + pad_ds[0] for x in subset_x]
59 subset_y = [int(y) + pad_ds[1] for y in subset_y]
60 # Initialize projection images plot

TypeError: 'NoneType' object is not iterable
```
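The failing line iterates over subset_x, which is evidently None when no subset region was chosen. A defensive sketch (pad_subset is a hypothetical helper, not the tomopyui API):

```python
def pad_subset(subset, pad):
    # hypothetical guard: subset_x/subset_y appear to be None when no
    # subset region was selected, so propagate "no subset" instead of
    # iterating over None
    if subset is None:
        return None
    return [int(v) + pad for v in subset]
```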

Viewing skewed aspect ratio images in import tab

Need a better way to view data with a skewed aspect ratio (e.g. when switching to the sinogram view). Currently, it looks like the image width is held constant, and the length is allowed to grow as long as necessary to keep the aspect ratio 1-to-1. It would probably be better to set a maximum length, and start changing the aspect ratio of images that would otherwise exceed that length. Maybe a control knob of some sort would be helpful too; not sure.
[screenshot: sinogram view stretched far beyond the viewport]
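The max-length idea can be sketched as below (display_height is a hypothetical helper; max_h=500 is an arbitrary choice):

```python
def display_height(native_w, native_h, display_w, max_h=500):
    # keep 1:1 aspect until the displayed height would exceed max_h,
    # then clamp, distorting only very tall images such as sinograms
    h = int(round(display_w * native_h / native_w))
    return min(h, max_h)
```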
