
mvregfus's People

Contributors

m-albert, max-brambach


mvregfus's Issues

Fused stack cropped too much

Yo Marvin (@m-albert),

Been getting back into this with the help of a rotation student.

Having fixed the stuff discussed in #8, optimized the parameters, and switched to a beefy GPU, we now get very nice fusion quality using raw_input_binning = [1,1,1], mv_final_spacing = np.array([0.4]*3), fusion_method = 'LR', and fusion_weights = 'dct'. 🎉

However, in the example we've been using for our tests, the final fused stack ends up being cropped too much; about 5% or so of the embryo gets cut off on both sides along the y axis. I think this is also something I've seen before with other samples and different settings.

Any thoughts on what's happening and how to avoid it? I should note that each of our input stacks only goes about 60% of the way through the sample, as there's no signal anymore if we go deeper than that (i.e. further away from the objective). Could this be the cause of the issue? Do we need to image all the way through in each view? Or could it be something else? Thanks!

RL does not use transpose when calculating correction

Hi Marvin

Great talking to you last night. As I mentioned, I was wondering whether the RL code takes the transpose of the forward model when calculating the correction here.

You may have already read this, but a good explanation of how the 'transpose' is used in multiview deconvolution is here. The transpose of the set of PSFs is implied: in the forward direction each PSF is applied individually to produce an image, but in the inverse direction the per-view results are summed into one correction factor. However, you also have to reverse each PSF (take the complex conjugate in the frequency domain).

If you look closely at the formula as written on Wikipedia, you can see that the flipped PSF is used when applying the correction. Of course, it doesn't make a difference when using a symmetric PSF.

[Image: the Richardson–Lucy update as given on Wikipedia, $\hat{u}^{(t+1)} = \hat{u}^{(t)} \cdot \left( \frac{d}{\hat{u}^{(t)} \ast p} \ast \hat{p} \right)$, where $\hat{p}(x) = p(-x)$ is the flipped PSF.]
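
To make this concrete, here is a minimal sketch of one multi-view RL iteration (my own illustration, not MVRegFus's actual code; the names and the averaged combination of per-view corrections are assumptions). The flip psf[::-1, ::-1, ::-1] is the real-space equivalent of the conjugate in the frequency domain:

import numpy as np
from scipy.signal import fftconvolve

def rl_update(estimate, views, psfs, eps=1e-12):
    # one multi-view Richardson-Lucy iteration with the flipped
    # ("transposed") PSF in the back-projection step
    correction = np.zeros_like(estimate)
    for d, p in zip(views, psfs):
        blurred = fftconvolve(estimate, p, mode='same')  # forward model
        ratio = d / np.maximum(blurred, eps)             # measured / predicted
        # back-project with the flipped PSF (adjoint of the convolution);
        # summing here is the 'transpose' of applying each PSF individually
        correction += fftconvolve(ratio, p[::-1, ::-1, ::-1], mode='same')
    return estimate * correction / len(views)            # averaged combination

Replacing that sum/average over views with a product of the per-view ratios gives the multiplicative combination I mention below.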

I also noticed this note, "due to scattering, the psf strongly varies within each view", here.

Technically scattering is random, not a property of the PSF. I think the way you are handling it is good (with a mask, it seems), but just be careful of extracting PSFs from highly scattering images. I believe we touched on this briefly when talking this week, but I have found PSFs extracted from biological samples to be unreliable because of uneven background. It seems reasonable that a PSF extracted from near the sample would be the most accurate (since it is by definition taken under the exact same conditions as the sample); however, if a random inconsistency causes a poor PSF, that PSF may not be reliable even in nearby locations, and it wouldn't be reliable to use for other images.

A final note: I checked over https://github.com/PreibischLab/multiview-reconstruction looking for the part where they do the update. It seems they refactored the code a lot, and I think it does apply the 'flipped' PSF properly when calculating the correction factor. However, I noticed, as you guys mentioned, there is the option of calculating the combined correction factor with a multiplication. That seems like it could cause noise amplification, especially if there is a lot of scatter. I haven't tried that method, but there was recently an acceleration method published in Nature Methods that many people found caused issues after a few iterations (see this discussion). I wonder if something similar could happen with multiplicative acceleration.

Getting in touch

Hi Marvin,

I didn't know a better way to contact you, but I'd love to talk more about this registration code. I have built some very similar tools using dask to massively parallelize SimpleITK functions over blocks of big arrays: https://github.com/GFleishman/CircuitSeeker (this is not at all up to date with my local copies of the repo). It might be really useful for us both to compare notes and see what's duplicated, what could be combined, and otherwise how we can help each other. Will you write to me?
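
To give a flavor of the pattern (a rough sketch, not CircuitSeeker's actual API; the filter and parameters are just placeholders):

import numpy as np
import dask.array as da
import SimpleITK as sitk

def smooth_block(block):
    # wrap a SimpleITK filter so it maps numpy block -> numpy block
    img = sitk.GetImageFromArray(block.astype(np.float32))
    out = sitk.SmoothingRecursiveGaussian(img, sigma=2.0)
    return sitk.GetArrayFromImage(out)

arr = da.random.random((512, 1024, 1024), chunks=(128, 256, 256))
# map_overlap shares a margin between chunks so the filter's spatial
# support doesn't create seams at chunk borders
smoothed = arr.map_overlap(smooth_block, depth=8, dtype=np.float32)
result = smoothed.compute()  # or persist on a distributed cluster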

fleishmang [at] janelia [dot] hhmi [dot] org

Thanks,
Greg

Fusion of single-view data?

Hi all,

Super exciting to find this project! We are acquiring large light sheet data sets using a high-NA oblique plane microscope (publication). We normally use BigStitcher to stitch and fuse our data. However, we acquire our data in Python, and it would be great to stay in that ecosystem. We are also searching for faster ways to fuse our laterally very large images with shallow z depth.

Is it possible to fuse multiple tiles (>30 tiles) from a single view at the moment? If so, would you mind giving us a hint as to which functions to look at to get started?

Thanks!
Doug

How can I treat illuminations independently on Zeiss Z.1 LS

Hi,
Thanks a lot for your great tool!
One question: is it possible to fuse the different illuminations (left and right light sheets) independently? So basically, to treat them as 'views' instead of fusing them before registering the views.
Best,
Max

Fusion crashes for very large images

Hi,
When I try to fuse very large images (4 volumes with 4 views, each consisting of ~600x2000x2000 px; ca. 40 GB), I get a memory allocation error. When I use smaller files, it works like a charm.
Is there a way to also fuse files that are too large for my memory?
Best,
Max

'Array' object has no attribute 'origin'

Hi Marvin,

I wanted to try this out for some test data I've recently collected, but ran into some trouble.

First, as an aside: installing the dependencies using conda (freshly installed and updated) on our instrument's Windows 10 Enterprise LTSC processing PC did not work with conda env create --file MVRegFus/mv_environment.yml (it got stuck at solving the environment; same with mv_environment_windows.yml). I had to manually create an empty env (conda create -n mvregfus python=3.7) and then install the dependencies step by step for it to work. This could be because conda is now on Python 3.9, but note that conda env create --file MVRegFus/mv_environment.yml python=3.7 also didn't work.

Second, and more importantly, now that the installation is complete I am trying to run it with the configurations as pasted below. Not sure if everything is correct; the input file is a single-timepoint, 3-color (1st channel DAPI), 3D stack with 3 views (each rotated by 120 degrees) acquired using ZEN 3.1 (Blue) version 3.1.0.00002. I'd like to register and fuse views 1 and 2 onto view 0.

Please let me know if there's something obviously wrong in the configurations. See below for the error I get when I run it.

Configurations
import os
import numpy as np

##########################
#### parameters to modify
#### approx. in descending order of relevance
##########################

# where elastix can be found (top folder)
elastix_dir = r'D:\SWAP\Jonas\_software\elastix'

# list of files to fuse
filepaths = [r'D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\tp3_e1-1.czi']

# where to save the output
out_dir = os.path.dirname(filepaths[0])  # e.g. same folder as first file in filepaths

# channels to fuse
channels = [0,1,2]
channelss = [channels]*len(filepaths)

# channel to use for registration
reg_channel = 0
reg_channels = [reg_channel] *len(filepaths)

# reference view for final fusion
ref_view = 0
ref_views = [ref_view] *len(filepaths)

# list of pairwise view indices to perform registration on
registration_pairs = [[0,1], [0,2]]
#registration_pairs = None
registration_pairss = [registration_pairs] *len(filepaths)

# optionally, specify the meanings of the indices
# occuring in the list of pairs
# this can be used to fuse illuminations independently
# using view_dict, which is a dictionary containing
# the indices as keys and dictionaries defining the indices as items, such as:
# >>> view_dict[0] = {'view': 0 # view 0 within the file
#                     'ill' : 1 # illumination 1 within that view
#                     }
# another example:
# >>> view_dict[0] = {'view': 0    # view 0 within the file
#                     'ill' : None # like this, both ills of this view are fused using the average of ills
#                     }
# another example:
# >>> view_dict[0] = {'view': 0    # view 0 within the file
#                     'ill' : 2    # like this, both ills of this view are fused using blending weights
#                     }

# in case of treating ills as independent views:
#              - illumination 0 comes from left
#              - illumination 1 comes from right
#              - rotating in positive direction (in angles)
#                brings left to the front
# so it makes sense to define the registration pairs like this: (view, ill)
# (0,1),(0,0)
# (0,0),(1,1)
# (1,1),(1,0)
# (1,0),(2,1)
# etc.
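# for example (hypothetical, following the convention above), both ills of
# three views could be treated as six independent views with keys 0..5:
# >>> view_dict = {2 * v + ill: {'view': v, 'ill': ill}
# ...              for v in range(3) for ill in range(2)}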

# three-view example:
view_dict = {i:{'view':i, 'ill': 0} for i in [0, 1, 2]}

# if ills of all views should be averaged, set view_dict to None:
#view_dict = None

# how to calculate final fusion volume
# 'sample': takes best quality z plane of every view to define the volume
# 'union': takes the union of all view volumes
final_volume_mode = 'sample'

# whether to perform an affine chromatic correction
# and which channel to use as reference
perform_chromatic_correction = False
ref_channel_chrom = 0

# binning of raw input from views (x,y,z)
# [1,1,1]: no binning
# shapes of views to be registered should not significantly
# exceed ~(400, 400, 400)
raw_input_binning = [5,5,2]

# background level to subtract
background_level = 200

# which binning to use for registration
#mv_registration_bin_factors = np.array([1,1,1])
mv_registration_bin_factors = np.array([4,4,4])

# registration mode for pairwise view registration
# (default is 2)
# -1: only preregistration (translation, no elastix)
# 0: only translation
# 1: translation + rotation
# 2: translation + rotation + affine
pairwise_registration_mode = 2

# final output spacing in um
mv_final_spacing = np.array([1.]*3)

# options for fusion
# fusion_method
# 'weighted_average': weighted average of views using the given weights
# 'LR': Lucy-Richardson multi-view deconvolution
fusion_method = 'LR'
# fusion_method = 'weighted_average'

# fusion weights
# 'blending': uniform weights with blending at the stack borders
# 'dct': weights derived from DCT image quality metric
# fusion_weights = 'dct'
fusion_weights = 'blending'

# options for DCT image quality metric for fusion
# setting None automatically calculates good values

# size of the cubic volume blocks on which to calc quality
dct_size = None
# size of maximum filter kernel
dct_max_kernel = None
# size of gaussian kernel
dct_gaussian_kernel = None

# weight normalisation parameters
# normalise such that approx. <dct_cumulative_weight_best_views> weight is
# contained in the <dct_how_many_best_views> best views
dct_how_many_best_views = 2
dct_cumulative_weight_best_views = 0.9

# options for weighted Lucy Richardson multi-view deconvolution
# maximum number of iterations
LR_niter = 25  # iters
# convergence criterion
LR_tol = 5e-5  # tol
# gaussian PSF sigmas
LR_sigma_z = 4  # sigma z
LR_sigma_xy = 0.5  # sigma xy
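# (aside: these presumably parameterize an anisotropic Gaussian kernel;
#  a hypothetical construction of such a PSF, not necessarily what
#  MVRegFus does internally:
#  >>> radii = [int(np.ceil(3 * s)) for s in (LR_sigma_z, LR_sigma_xy, LR_sigma_xy)]
#  >>> z, y, x = np.mgrid[tuple(slice(-r, r + 1) for r in radii)]
#  >>> psf = np.exp(-(z**2 / (2 * LR_sigma_z**2) + (y**2 + x**2) / (2 * LR_sigma_xy**2)))
#  >>> psf /= psf.sum()  # normalize so fusion conserves intensity
#  )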


##########################
#### end of parameters to modify
##########################
Output and errors
(mvregfus) D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw>python mvregfus_run.py
http://localhost:8787
D:\SWAP\Jonas\_software\Miniconda3\envs\mvregfus\lib\site-packages\numpy\core\fromnumeric.py:86: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
  return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
These pairs of keys will be registered:
[[0, 1], [0, 2]]
They refer to the keys in 'view_dict':
{0: {'filename': 'D:\\SWAP\\Jonas\\20221013 - LS_Z1 test - '
                 'DAPI-Sox10nRmG-mitfaHCR\\tp3_e1-1\\raw\\tp3_e1-1.czi',
     'ill': 0,
     'view': 0},
 1: {'filename': 'D:\\SWAP\\Jonas\\20221013 - LS_Z1 test - '
                 'DAPI-Sox10nRmG-mitfaHCR\\tp3_e1-1\\raw\\tp3_e1-1.czi',
     'ill': 0,
     'view': 1},
 2: {'filename': 'D:\\SWAP\\Jonas\\20221013 - LS_Z1 test - '
                 'DAPI-Sox10nRmG-mitfaHCR\\tp3_e1-1\\raw\\tp3_e1-1.czi',
     'ill': 0,
     'view': 2}}
INFO: setting registration degree to 2 (trans+rot+aff)
Skipping groupwise registration!
DECORATOR local... readStackFromMultiviewMultiChannelCzi
producing D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\mv_input_view_000_000_v000_c02.zarr
reading D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\tp3_e1-1.czi view 0 ch 2 ill 0
Binning down raw input by xyz factors [5, 5, 2]
old shape: 640 1920 1920
new shape: 320 384 384
DECORATOR local... readStackFromMultiviewMultiChannelCzi
producing D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\mv_input_view_000_000_v001_c02.zarr
reading D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\tp3_e1-1.czi view 1 ch 2 ill 0
Binning down raw input by xyz factors [5, 5, 2]
old shape: 473 1920 1920
new shape: 236 384 384
DECORATOR local... readStackFromMultiviewMultiChannelCzi
producing D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\mv_input_view_000_000_v002_c02.zarr
reading D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\tp3_e1-1.czi view 2 ch 2 ill 0
Binning down raw input by xyz factors [5, 5, 2]
old shape: 477 1920 1920
new shape: 238 384 384
DECORATOR local... readStackFromMultiviewMultiChannelCzi
producing D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\mv_input_view_000_000_v001_c01.zarr
reading D:\SWAP\Jonas\20221013 - LS_Z1 test - DAPI-Sox10nRmG-mitfaHCR\tp3_e1-1\raw\tp3_e1-1.czi view 1 ch 1 ill 0
Binning down raw input by xyz factors [5, 5, 2]
old shape: 473 1920 1920
new shape: 236 384 384
DECORATOR local... bin_stack
Traceback (most recent call last):
  File "mvregfus_run.py", line 235, in <module>
    results.append(io_utils.get(graph, result_keys[i:i + N * len(channels)]))
  File "d:\swap\jonas\_software\mvregfus\mvregfus\io_utils.py", line 31, in get
    return dask.local.get_sync(cgraph,key)
  File "D:\SWAP\Jonas\_software\Miniconda3\envs\mvregfus\lib\site-packages\dask\local.py", line 558, in get_sync
    **kwargs,
  File "D:\SWAP\Jonas\_software\Miniconda3\envs\mvregfus\lib\site-packages\dask\local.py", line 496, in get_async
    for key, res_info, failed in queue_get(queue).result():
  File "D:\SWAP\Jonas\_software\Miniconda3\envs\mvregfus\lib\concurrent\futures\_base.py", line 428, in result
    return self.__get_result()
  File "D:\SWAP\Jonas\_software\Miniconda3\envs\mvregfus\lib\concurrent\futures\_base.py", line 384, in __get_result
    raise self._exception
  File "D:\SWAP\Jonas\_software\Miniconda3\envs\mvregfus\lib\site-packages\dask\local.py", line 538, in submit
    fut.set_result(fn(*args, **kwargs))
  File "D:\SWAP\Jonas\_software\Miniconda3\envs\mvregfus\lib\site-packages\dask\local.py", line 234, in batch_execute_tasks
    return [execute_task(*a) for a in it]
  File "D:\SWAP\Jonas\_software\Miniconda3\envs\mvregfus\lib\site-packages\dask\local.py", line 234, in <listcomp>
    return [execute_task(*a) for a in it]
  File "D:\SWAP\Jonas\_software\Miniconda3\envs\mvregfus\lib\site-packages\dask\local.py", line 225, in execute_task
    result = pack_exception(e, dumps)
  File "D:\SWAP\Jonas\_software\Miniconda3\envs\mvregfus\lib\site-packages\dask\local.py", line 220, in execute_task
    result = _execute_task(task, data)
  File "D:\SWAP\Jonas\_software\Miniconda3\envs\mvregfus\lib\site-packages\dask\core.py", line 119, in _execute_task
    return func(*(_execute_task(a, cache) for a in args))
  File "d:\swap\jonas\_software\mvregfus\mvregfus\io_utils.py", line 514, in full_func
    result = func(*nargs, **kwargs)
  File "d:\swap\jonas\_software\mvregfus\mvregfus\multiview.py", line 444, in bin_stack
    origin = im.origin
AttributeError: 'Array' object has no attribute 'origin'

Any ideas? 🙃

Reading in non-CZI data

We also have a second, separate question. We typically save our data as TIFFs and then convert to the BigDataViewer (BDV) H5 format using npy2bdv. Since we can read our data in as a NumPy or dask array, we are also curious whether you could give a hint about the hooks where we can feed data that is not in CZI files into the registration and fusion routines.
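
For reference, this is roughly how we load a tile today (a sketch; the filename is a placeholder):

import dask.array as da
import tifffile
import zarr

# lazily expose a TIFF tile as a dask array via tifffile's zarr store,
# so nothing is read into RAM until compute time
store = tifffile.imread('tile_000.tif', aszarr=True)
arr = da.from_array(zarr.open(store, mode='r'))
print(arr.shape, arr.dtype)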

Thanks!
Doug
