
readlif's People

Contributors

ilorevilo, jojoelfe, nimne, tmtenbrink


readlif's Issues

Feature request: read parent/child relation for scene names

This may be related to the existing issue
#19
I'm also using LAS X Navigator and multi-well tile scans.
These can be continuous (mosaic merges) or not.
Originally I posted this to aicsimageio because that's what I was using to get my image data into python, but now I understand it is a better fit here. Here's the original issue:
AllenCellModeling/aicsimageio#277
Using Navigator it's easy to do multi-well experiments, but the names don't get read in properly on import, which makes everything confusing.

Here's a link to a representative LIF:
https://www.dropbox.com/s/jk9se8i1kpzqvsn/20210428_24w_L929_Ho_B2C3.lif?dl=0
Just as an example, we can look at the last section:
B is row B, then B/n/ are wells in the B row. R1-R5 are actually mosaic merges, but that's not material to this issue. Same with C/
At the end is B/3/R1-R60, which ideally could be imported together.

ReadMyLIFs by @mutterer does this perfectly (screenshot attached).
It's easy to import all the regions together (this is like a mosaic, except scattered in a well).
I know that is using bio-formats but perhaps that's possible here, to pass on to aicsimageio?

At the moment aicsimageio ignores the structure of the project, so I get many scenes named R1, etc., and can't call the scenes by name as a result. I can now use indexes, so it works, but it's less than ideal; without the well context, everything is more tricky.

Here's the xml for this LIF
big-lif.xml.zip

Here's the start of the last section of images:
<Attributes><Attribute>ExtendedMemoryBlock</Attribute><Attribute>_DCSERVER_PREVENT_MEMKILL</Attribute></Attributes><Memory Size="2764800" MemoryBlockID="MemBlock_1050" /><Children /></Element><Element Name="B3 60 10x" Visibility="1" CopyOption="1" UniqueID="a6f972e2-a82c-11eb-bc72-a4bb6dca3902"><Data><Collection ChildTypeTest="AcceptAll"><ChildTypeList /></Collection></Data> <Memory Size="0" MemoryBlockID="MemBlock_1319" /><Children><Element Name="B" Visibility="1" CopyOption="1" UniqueID="a6f972e3-a82c-11eb-bc72-a4bb6dca3902"><Data><Collection ChildTypeTest="AcceptAll"><ChildTypeList /></Collection></Data> <Memory Size="0" MemoryBlockID="MemBlock_1320" /><Children><Element Name="3" Visibility="1" CopyOption="1" UniqueID="a6f972e4-a82c-11eb-bc72-a4bb6dca3902"><Data><Collection ChildTypeTest="AcceptAll"><ChildTypeList /></Collection></Data> <Memory Size="0" MemoryBlockID="MemBlock_1321" /><Children><Element Name="R1" Visibility="1" CopyOption="1" UniqueID="a6f972e5-a82c-11eb-bc72-a4bb6dca3902"><Data><Image TextDescription=""><Attachment Name="TileScanInfo" Application="LAS AF" FlipX="0" FlipY="0" SwapXY="0"><Tile FieldX="0" FieldY="0" PosX="0.0534542067" PosY="0.0256040830" />

You can see
<Element Name="B3 60 10x" Visibility="1" CopyOption="1" UniqueID="a6f972e2-a82c-11eb-bc72-a4bb6dca3902">
And then Children:

<Children><Element Name="B" Visibility="1" CopyOption="1" UniqueID="a6f972e3-a82c-11eb-bc72-a4bb6dca3902">
<Children><Element Name="3" Visibility="1" CopyOption="1" UniqueID="a6f972e4-a82c-11eb-bc72-a4bb6dca3902">
<Children><Element Name="R1" Visibility="1" CopyOption="1" UniqueID="a6f972e5-a82c-11eb-bc72-a4bb6dca3902">

I'd love to be able to see those parent names, so something like:
"B3 60 10x B3 R1"
In an ideal world, because R1 has the "TileScanInfo" bit, the scene would be "B3 60 10x B3" and then all the R1-60 would be part of that as M values—akin to a mosaic, even though they are not touching.

I think readlif is already reading that info?

def _recursive_image_find(self, tree, return_list=None, path=""):

But I'm 100% a python noob and can't quite figure out what's going on.
I'm willing to help, hack, or test, but need a bit of guidance.
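For illustration, here is a toy sketch (not readlif's actual code) of walking an XML tree like the one quoted above with the standard library, building slash-delimited, parent-prefixed names from the Element/Children structure:

```python
import xml.etree.ElementTree as ET

# Minimal stand-in for the LIF metadata structure quoted above.
XML = """
<Element Name="B3 60 10x">
  <Children>
    <Element Name="B">
      <Children>
        <Element Name="3">
          <Children>
            <Element Name="R1"><Children /></Element>
            <Element Name="R2"><Children /></Element>
          </Children>
        </Element>
      </Children>
    </Element>
  </Children>
</Element>
"""

def collect_paths(element, prefix=""):
    """Return a parent-prefixed path for every leaf Element in the tree."""
    path = f"{prefix}/{element.attrib['Name']}" if prefix else element.attrib["Name"]
    children = element.find("Children")
    leaves = [] if children is None else children.findall("Element")
    if not leaves:
        return [path]
    result = []
    for child in leaves:
        result.extend(collect_paths(child, path))
    return result

print(collect_paths(ET.fromstring(XML)))
# ['B3 60 10x/B/3/R1', 'B3 60 10x/B/3/R2']
```

Something along these lines is what a parent-aware scene name could look like for the B/3/R1-R60 case.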
Thanks for making a great package! 😍

Possible problem with reading images triggered from Las X stage navigator

Hi Nick,

First of all, thank you for all the work you have done on this awesome package, I love it! I use it every day for my image analysis, and it works exactly as it should. However, since I now try to automate my image acquisition, I run into a major issue. I have to say that I don't know if the mistake originates in the Las X software or in readlif, but I thought it was worth giving it a try.

For the automation of the image acquisition, I want to use the Las X stage navigator, which can automatically control the movement of the stage. The imaging protocol is not much different from the protocol I normally use; there is really only one difference: if I want to use the stage navigator, I have to trigger the image acquisition from the stage navigator.

In the Las X software, the image is perfectly fine, and I have no problems visualizing it. However, when I load the image using readlif, something strange happens. It appears that the 2 channels I currently use fuse together, making analysis impossible.

I added 2 images to show the differences:
Image 1: a normal image (made without the stage navigator)
Screenshot from 2021-03-08 17-05-05

Image 2: an image made using the stage navigator
Screenshot from 2021-03-08 17-05-35

Using a very nice 3D viewer called napari, you are able to see that there is a sort of organization in the way the channels are fused: it appears that the nuclei (small) are located on top of the fat droplets (larger).
Screenshot from 2021-03-08 17-10-35

I would really appreciate it if you could take a look at this. When you are able to, I'll send the example Lif file to you via Twitter again, just like last time!

Cheers,
Dirk

The script I used:

from PIL import Image
from readlif.reader import LifFile
import numpy as np
import napari
%gui qt5

# Define lif file location
RawData= LifFile('The_Lif_Example.lif')

# get a list of all images and a list of numbers to iterate over.
lif_image_list = [i for i in RawData.get_iter_image()]
Number_of_lif_images = len(lif_image_list)
Lif_Image_Number_List = list(range(0,Number_of_lif_images))

for i in Lif_Image_Number_List:
    RawData_Image = RawData.get_image(i)
    Image_Name = RawData_Image.name
    print(Image_Name)
    
    # Explore the image dimensions
    frame_list   = [i for i in RawData_Image.get_iter_t(c=0, z=0, m=0)]
    z_list       = [i for i in RawData_Image.get_iter_z(t=0, c=0, m=0)]
    channel_list = [i for i in RawData_Image.get_iter_c(t=0, z=0, m=0)]
    mosaictile_list = [i for i in RawData_Image.get_iter_m(t=0, z=0, c=0)]
    # As can be observed, these are both similar for both the normal image and the image acquired with the stage navigator.
    #print(frame_list)
    #print(z_list)
    #print(channel_list)
    #print(mosaictile_list)
    #print()
    
    # get additional information:
    Info = RawData_Image.info
    # As can be observed, also this looks similar for both images
    #print(Info)
    #print()
    
    # Now lets work with the images:
    # Thresholding for every z-stack:
    Z_Number_List = list(range(0, len(z_list)))
    
    for Z_stackNr in Z_Number_List:
        # Collect the two channels of interest. (assumes that channel 0 == nucleus signal and channel 1 == fat signal)
        Nuc_Image_V1 = RawData_Image.get_frame(z=Z_stackNr, t=0, c=0, m=0)
        Fat_Image_V1 = RawData_Image.get_frame(z=Z_stackNr, t=0, c=1, m=0) 

        # Transform the images into workable uint8 numpy arrays (the images are already 8 bit so they don't need correction for this)
        Nuc_Image = np.uint8(np.array(Nuc_Image_V1))
        Fat_Image = np.uint8(np.array(Fat_Image_V1))
        
        # Make a 3d numpy array of the image and collect in a variable.
        if Z_stackNr == 0:
            Nuc_Raw_Image_Array_3D = Nuc_Image
            Fat_Raw_Image_Array_3D = Fat_Image
        else:
            Nuc_Raw_Image_Array_3D = np.dstack((Nuc_Raw_Image_Array_3D, Nuc_Image))
            Fat_Raw_Image_Array_3D = np.dstack((Fat_Raw_Image_Array_3D, Fat_Image))
            
    # now that we have made a 3d numpy array, lets visualize what we have:
    Dimension_Scale = abs(np.asarray(RawData_Image.info['scale'][0:3]))
    scale_1 = ((Dimension_Scale[0]/Dimension_Scale[0]),(Dimension_Scale[0]/Dimension_Scale[1]),(Dimension_Scale[0]/Dimension_Scale[2]))
    
    with napari.gui_qt():
        viewer = napari.Viewer(ndisplay=3)
        viewer.add_image(Nuc_Raw_Image_Array_3D, scale=scale_1, colormap="blue", blending='additive')
        viewer.add_image(Fat_Raw_Image_Array_3D, scale=scale_1, colormap="green", blending='additive')

Incompatible with images acquired by the newest version of LasX (version 4.3.0.24308)

Images acquired normally (not using the LasX navigator) cannot be opened anymore.

Error that is printed:
ValueError: Number of images is not equal to number of offsets, and this file does not appear to be truncated. Something has gone wrong.

This error originates from line 739 of "reader.py".

I tested opening lif files acquired with an older version of LasX to ensure the issue was not caused by changes in my environment. I can still read the "old" files, but not the "new" files, where the only difference is the version of LasX.

Unfortunately I don't have the XML files for either of the lif files; let me know if they are required and I'll try to find some other files.

Thanks for the effort!

Cheers,
Dirk

GitHub Actions CI

Are there any plans to shift from Travis to GitHub Actions for CI? It would enable you to set up more comprehensive build matrices that also include macOS runners, for example. I'd be happy to submit a PR for this if you think it's worthwhile!

Timestamps

Hello, do you know if there are timestamps in the metadata anywhere?

Many thanks for the help.

do you plan to add 32bit image support?

I have a file (which I can send you) that is a mix of 8-, 16- and 32-bit images. In the file loaded below, image 0 is 8-bit, image 1 is 32-bit and image 2 is 16-bit.

Your reader handles the 8- and 16-bit images fine, but for the 32-bit image I get the "please submit a bug report" error:
(base) PS E:\users\klingr\IDL_nih\readlif-0.6.5\readlif\testingLIFreader> python
Python 3.9.16 (main, Mar 8 2023, 10:39:24) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from reader import LifFile
>>> new = LifFile('Example_FLIM.lif')
>>> im = new.get_image(0)
>>> im.get_frame(z=0, t=0, c=0)
<PIL.Image.Image image mode=L size=512x512 at 0x2AD0E3AE0A0>
>>> im = new.get_image(1)
>>> im.get_frame(z=0, t=0, c=0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "E:\users\klingr\IDL_nih\readlif-0.6.5\readlif\testingLIFreader\reader.py", line 380, in get_frame
    return self._get_item(item_requested)
  File "E:\users\klingr\IDL_nih\readlif-0.6.5\readlif\testingLIFreader\reader.py", line 182, in _get_item
    raise ValueError("Unknown bit-depth, please submit a bug report"
ValueError: Unknown bit-depth, please submit a bug report on Github
>>> im = new.get_image(2)
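For what it's worth, a hedged sketch (not readlif's actual code) of the kind of bit-depth-to-dtype lookup that would need a new entry before 32-bit frames could be decoded. Whether 32-bit FLIM data is float or integer would need checking against the file's XML before implementing this for real:

```python
import numpy as np

# Hypothetical mapping; the np.uint32 entry for 32-bit data is an assumption.
BIT_DEPTH_DTYPES = {8: np.uint8, 16: np.uint16, 32: np.uint32}

def dtype_for_bit_depth(bits):
    """Return the numpy dtype for a given LIF bit depth, or raise like readlif does."""
    try:
        return BIT_DEPTH_DTYPES[bits]
    except KeyError:
        raise ValueError("Unknown bit-depth, please submit a bug report on Github")
```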

Accessing wavenumbers in multiwavenumber measurements

Hi,
i have a .lif file with 10 different measurements, each consisting of 33 images (each one referring to a different wavenumber). Now I would like to gather all this data. I suspect the number of timestamps is the relevant parameter, after inspecting the header and the GitHub code. Also, I've found NumberOfTimeStamps being equal to 132, which would fit 4 channels with 33 wavenumbers.

However, after loading the file, the number of timestamps is equal to one for all measurements. Now my question is whether it is even possible to load these multi-wavelength images and, if so, how I should do it.

Example code:

from readlif.utilities import get_xml
from readlif.reader import LifFile

new = LifFile(file_path)
img_0 = new.get_image(0) # first image in the file, which consists of 10 images

frame_list = [i for i in img_0.get_iter_t(c=0, z=0)]

print(img_0)
# LifImage object with dimensions: Dims(x=1024, y=1024, z=1, t=1, m=1)

print(get_xml(file_path)[1])
# ... </ImageDescription> ... <TimeStampList NumberOfTimeStamps="132"> ...

Thank you in advance.

Convenience function for retrieving numpy arrays?

Hi @nimne ,

I was just using your readlif library and I was wondering if there is a way for retrieving a 3D stack from a .lif file in a convenient way. I wrote this function:

def lif_to_numpy_stack(lif_image):
    num_slices = lif_image.info['dims'].z
    
    return np.asarray([np.array(lif_image.get_frame(z=z)) for z in range(num_slices)])

And I'm now wondering if readlif already contains such a convenience function and/or if I should send a pull-request with it. Otherwise, might it make sense to add an example for reading a 3D stack to the readme?

Thanks!

Best,
Robert

Associate image metadata with LifImage, XZ plane error

Tried this package after having a hard time getting started with python-bioformats, but I didn't get very far.

It appears that item.find for getting dim_y is returning None

Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.3.5\plugins\python-ce\helpers\pydev\pydevd.py", line 1477, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm Community Edition 2018.3.5\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "C:/Users/backman05/Documents/Bitbucket/Confocal_NN_SurfaceTracking/src/fileHandling/lif.py", line 12, in <module>
    f = LifFile(filePath)
  File "C:\Users\backman05\Anaconda3\envs\confocal\lib\site-packages\readlif\reader.py", line 552, in __init__
    self.image_list = self._recursive_image_find(self.xml_root)
  File "C:\Users\backman05\Anaconda3\envs\confocal\lib\site-packages\readlif\reader.py", line 353, in _recursive_image_find
    self._recursive_image_find(item, return_list, appended_path)
  File "C:\Users\backman05\Anaconda3\envs\confocal\lib\site-packages\readlif\reader.py", line 365, in _recursive_image_find
    dim_y = int(item.find(
AttributeError: 'NoneType' object has no attribute 'attrib'

On closer inspection this is occurring for an image that is an XZ scan. Adding the same try:except handling that is used for other dimensions seems to have fixed it.

XT scan

Hi,
Would it be possible to implement reading xt scan?
Thanks.
Petro

pixel size / scale is not in 1/nm but in 1/µm

First of all, thanks for the great work! Very useful project.

Lengths are given in the xml metadata in units of meters, and multiplied by 10**6 in the code when calculating the scale attribute (making it micrometers), whereas the documentation says they are given in nanometers.

Question/suggestion : returning LifImage objects as NumPy arrays.

The script I'm currently working with uses imageio (https://github.com/imageio/imageio) to read .tif files and return them as NumPy arrays containing pixel data for each individual frame.

NumPy provides a function to turn a Pillow object into an array, but as I understand it, only individual frames obtained with, for example, get_iter_m, are Pillow objects.

For now, the only solution I see is, for each LifImage object, a frame-by-frame loop which appends data from each frame to an existing array, while adding a column that stores frame numbers.

Is there, or could there be, a more elegant way to turn LifImage objects directly into NumPy arrays?
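A generic sketch of such a loop, assuming the LifImage attributes (channels, nz) and get_frame signature used in other snippets on this page; this is not an existing readlif function:

```python
import numpy as np

def lif_image_to_array(img):
    """Stack every frame of a LifImage-like object into a (channel, z, y, x) array.

    Assumes img exposes .channels, .nz and .get_frame(z=, t=, c=) returning
    something numpy can convert (Pillow images qualify).
    """
    return np.stack([
        np.stack([np.asarray(img.get_frame(z=z, t=0, c=c)) for z in range(img.nz)])
        for c in range(img.channels)
    ])
```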

Issues with readlif and RGB camera

Hello,

I've been using this package for a while to extract images from .lif generated by a Leica microscope with a monochrome camera. Thank you very much for the work done!

Recently, I moved to a different setup with an RGB camera. I have been unable to successfully extract images with readlif from the .lif files obtained on this setup. The images obtained seem to have a smaller field of view than they should, and seem to mix different areas on a single image. I went back to basics and used the example code provided, but the issue persisted. I've attached an example of such an image for reference. Note that I can open these .lif files with no issues in ImageJ.

I am not very knowledgeable about images and image formats, but I believe that the issue might arise from the fact that the images taken by the RGB camera are interleaved, and therefore they are not correctly divided into channels as they should be. This hypothesis is based on the fact that there is an "interleaved" parameter in the .lif file that I was able to check with ImageJ.
image18
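The interleaving hypothesis can be sketched with made-up shapes: RGB cameras often store pixels as R,G,B,R,G,B,... so a reader that assumes planar channels (RRR...GGG...BBB...) will mix areas exactly as described. This is an illustration of the hypothesis, not a confirmed account of what readlif does:

```python
import numpy as np

h, w = 2, 3
interleaved = np.arange(h * w * 3, dtype=np.uint8)  # raw byte stream, pixel-interleaved
planar_wrong = interleaved.reshape(3, h, w)          # wrong: planar assumption mixes areas
pixel_right = interleaved.reshape(h, w, 3)           # right: one RGB triple per pixel
channels = pixel_right.transpose(2, 0, 1)            # -> (channel, h, w)
```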

Failed to load tiled image - Unknown bit depth

Hello Nick

I'm trying to use readlif to import a lif file that contains a series of 82 images. That's 9x9 tiles (each 1024x1024) plus the stitched version. Each image is 12-bit with 2C, 6Z & 1T.

Below is my code:

from readlif.reader import LifFile

new = LifFile(image_path)
img_0 = new.get_image(0)
img_list = [i for i in new.get_iter_image()]

print(img_0)
print(img_list)
print(img_list[0].get_frame(z=0, t=0, c=0))

And the outputs and error message:

<__main__.LifImage object at 0x0000023433866D08>
[<__main__.LifImage object at 0x0000023433866588>]
Traceback (most recent call last):
  File "C:/Users/dhayes/Dropbox/PyCharmProjects/Erik_Sam_NeuriteLength/NeuriteLengthAnalysis.py", line 539, in <module>
    print(img_list[0].get_frame(z=5, t=0, c=1))
  File "C:/Users/dhayes/Dropbox/PyCharmProjects/Erik_Sam_NeuriteLength/NeuriteLengthAnalysis.py", line 163, in get_frame
    return self._get_item(item_requested)
  File "C:/Users/dhayes/Dropbox/PyCharmProjects/Erik_Sam_NeuriteLength/NeuriteLengthAnalysis.py", line 126, in _get_item
    raise ValueError("Unknown bit-depth, please submit a bug report"
ValueError: Unknown bit-depth, please submit a bug report on Github

Although the .lif file contains a series of many images, get_iter_image() only returns one.

Looking a bit into the code, I found that within the _get_item() function where this traceback is generated, len(data) has a value of 169869312 which equals 1024x1024x81x2 (with self.dims = (1024, 1024, 6, 1) and self.channels = 2). I'm guessing data combines all 81 tiles across both channels for a given z? I can therefore reshape to get my images, but does readlif have any function to directly extract series of images from a single .lif file?

Thanks for any help you have!
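To make the arithmetic above concrete, a small sketch of the reshape being described. The axis order (tiles before or after channels) is a guess and would need verifying against known tile content:

```python
import numpy as np

# The reported buffer length matches 81 tiles x 2 channels x 1024 x 1024
# (one byte per value at this z-plane).
assert 1024 * 1024 * 81 * 2 == 169869312

data = np.zeros(169869312, dtype=np.uint8)   # stand-in for the raw buffer
tiles = data.reshape(81, 2, 1024, 1024)      # hypothetical (tile, channel, y, x) split
```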

additional dimensions (excitation and emission wavelength)

In addition to the currently supported dimensions, data may be recorded with varying detection and excitation wavelengths. These are denoted by DimID 5 and 9 in the xml metadata. It would be nice and I think fairly straightforward to implement these. I am not aware what DimID's 6-8 are used for, but it may be documented somewhere.

DimID   dimension                            unit
1       x-axis                               m
2       y-axis                               m
3       z-axis                               m
4       time                                 s
5       detection wavelength                 m
6-8     ?
9       illumination/excitation wavelength   m
10      mosaic tile

I can provide example files for wavelength sweep data, if this would be helpful.

ValueError : I/O operation on closed file

I installed readlif on Python 3.7.4, and whenever I try to use the most basic function, I get the following error:

Exception has occurred: ValueError
I/O operation on closed file

My code is simply new = LifFile(file_path)

XML file attributes

Hi, Not sure if this is related to the other xml questions on metadata.

There doesn't seem to be much info (and I'm not blaming anyone, because there's always lots to do).

I just wondered if you could explain the xml access in this library. For someone who isn't sure what's in a lif file, or who has many of them: can we parse out the different attributes to move to other file formats such as numpy arrays/zarr or xarray? Pyimagej makes a bit of a hash of it at the moment.

Hoping you can provide some guidance.
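One low-tech way in, as a sketch rather than an official API: readlif's get_xml utility (used in other issues on this page) hands back the parsed metadata root, and plain ElementTree can then inventory what the file contains before deciding how to export it:

```python
import xml.etree.ElementTree as ET

def tag_counts(root):
    """Count every element tag in the LIF metadata tree."""
    counts = {}
    for elem in root.iter():
        counts[elem.tag] = counts.get(elem.tag, 0) + 1
    return counts

# Tiny stand-in for the root returned by readlif.utilities.get_xml(path)[0].
root = ET.fromstring('<LMSDataContainerHeader><Element Name="A"/><Element Name="B"/></LMSDataContainerHeader>')
print(tag_counts(root))
# {'LMSDataContainerHeader': 1, 'Element': 2}
```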

Unexpected behavior of get_frame args

Hi Nick,

Thank you for creating this! I'm not a trained programmer and I apologize if I'm simply misusing your package. Here's my issue:

Create a basic object and load the second series:

lif = './data/LIF_FILE_TWO.lif'
raw_data = LifFile(lif)
img_1 = raw_data.get_image(1)

The following should display a grayscale image of the first channel of the first z-slice in my series, and it does:
img_1.get_frame(z=0, t=0, c=0, m=0).show()

Changing c=0 to c=1 does not display the second channel. What I see is the second z-slice of the first channel.
img_1.get_frame(z=0, t=0, c=1, m=0).show()

In order to see the second channel, I need to increment the value of z:
img_1.get_frame(z=1, t=0, c=0, m=0).show()

I wasn't able to upload my LIF file, likely due to size, but I'd be happy to share it with you.

[Bug/Docu] Image info is not an attribute

I try to access the Series Name of the images.

I can view the Series name via ImageJ
ImageJ example

The documentation recommends this code:

for image in new.get_iter_image():
    for frame in image.get_iter_t():
         frame.image_info['name']

Running the code results in an AttributeError: 'Image' object has no attribute 'image_info'

How can I access the name of my Image Series?

License change to MIT / BSD

Hey @nimne, just curious if you would be willing to change the license of this repo to MIT. I don't see anything in the repo that depends on other GPL code so I don't believe there would be any issue there.

On my end this isn't an issue but downstream users care. I was just going through the readers in aicsimageio and I believe I need to add some comments in the README that say "'LifReader' and 'CziReader' are licensed under GPL" for example.

Snapshot images are largely different to images from LAS

Hello,

I am using readlif on .lif files from Confocal laser scanning fluorescence microscope (Leica Microsystems SP5).

Although most of the images I get are the same as from LAS, the snapshot images are greatly different from the images in LAS.

Below, I listed the images from the attached .lif file, in LAS and from readlif.
(Please ignore the colors, as I have made the readlif images in gray-scale.)

Please help me to figure out the reason for this difference and how to get the images similar to LAS from readlif as well.

Thank you!

In LAS:
SP5_snpshot_1_LAS

from readlif
SP5_snpshot_1_Series010Snapshot All2_t0_c0_z0
SP5_snpshot_1_Series010Snapshot All2_t0_c1_z0
SP5_snpshot_1_Series010Snapshot All2_t0_c2_z0

[Duplicate] [Bug] Increment z jumps in channels

I experienced a strange mapping in the get_frame function.

executing this code:

new = LifFile(file)
for img in new.get_iter_image():
    slice1 = img.get_frame(z=1, t=0, c=0)
    slice2 = img.get_frame(z=img.channels, t=0, c=0)

slice1 is z=0, c=1 instead of z=1, and
slice2 is z=1, c=0.

I could fix that behavior with this method:

def get_frame(imgContaine, z, c):
    i = imgContaine.channels * z + c
    z = i % imgContaine.nz
    c = i // imgContaine.nz
    return imgContaine.get_frame(z=z, t=0, c=c)

My images were recorded by a Leica Microsystems LAS AF - TCS SP5.
I am on readlif version 0.6.5, installed via pip in a venv.
ImageJ does open my images correctly.

Error on parameter LifImage.scale[2]

Hello,
Using the readlif library, I found an error in the parameter LifImage.scale[2], which returns the measurement of a pixel in micrometers for the depth z of the image. When we calculate the value 1/LifImage.scale[2], we do not get the value given by Las X in the image parameters. If I understood correctly, LifImage.scale[2] is calculated by dividing the number of z points by the depth traveled in micrometers. In reality, the number of z points minus 1 should be divided by the depth traveled in micrometers, because the number of z intervals should be counted, not the number of z points. When I calculate (number of z points - 1) / depth traveled in micrometers by hand, I find the value given by Las X. I think this is an error in the LifImage.scale[2] parameter, which should be computed as described above.
If however I had misunderstood, I would like your explanations!
Thank you for all the work done,
Sincerely,
Alexandre.

Inconsistent treatment of dimensions

I was wondering why the LifImage class has nz and nt attributes, but not e.g. nx or ny attributes, although it is of course trivial to obtain these from LifImage.dims.

Perhaps it is related to the comment on line 342 of reader.py:
# Don't need a try / except block, all images have x and y
I am not sure this is necessarily true. Our (Leica) confocal microscope for example can record directly in the xz plane, without scanning along the y direction. I suspect (but I haven't yet checked) it stores these not as nx×1 images but rather as nx×nz images.

I intend to check this soon, and could provide an example file of this, if it would be useful.

Can I save a single .lif image into a new lif file?

I have a large LIF file and I want to extract only a few images. I have managed to select the images that I want, but I have been unable to find a way to save them as individual lif files. It is possible for me to transform them into Pillow objects and then save them, but this leads to some format issues (I have 4-channel RGB-Gray images). Is there a simple way to save a lif image after extracting it from the list?

The readout differences between readlif and LAS X navigator

I know that readlif has access to the .lif file. When I try to use numpy to process the readout of the .lif file, I find it is different from the LAS X navigator.
Below is the image from LAS X, with the 1st channel set to the Z projection (MAX intensity) method.
LAS1
And below is the image I got with readlif and numpy.
index_1_sub0_binary_0
The spots are similar, but the pixel-level details are different.

Datetime

Hi!

I am trying to extract the exact date and time for each acquisition.
The xml file does not contain it, nor does the information dict.

Can you help me?
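A hedged sketch of one way to hunt for this: search the LIF XML (e.g. the root returned by readlif.utilities.get_xml) for any attribute whose name looks date- or time-related. The attribute names in the test tree below are made up; inspect your own file's XML to see what is actually there:

```python
import xml.etree.ElementTree as ET

def find_timestamp_attrs(root):
    """List every (tag, attribute, value) whose attribute name mentions time or date."""
    hits = []
    for elem in root.iter():
        for key, value in elem.attrib.items():
            if "time" in key.lower() or "date" in key.lower():
                hits.append((elem.tag, key, value))
    return hits
```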

Error in scaling factor calculations

I have a question concerning image scaling calculated starting in reader.py line 617.

On lines 628 and 632, the math is currently as follows: $[scale] = \frac{ [NumberOfElements] - 1}{[Length]}$

Why is the numerator $([NumberOfElements] - 1)$? As best I can tell, it should be only $[NumberOfElements]$.

When exporting images/videos through the Leica software, the included metadata calculates $Voxel = \frac{[Length]}{[NumberOfElements]}$, where voxel is equivalent to $1/[scale]$. There is no "-1" in the exported metadata.

The "-1" in readlif's code seems to introduce error into the scaling factors - is there something I am missing?

I have problems loading a 4 channel image

Hello. I'm using readlif, and it looks like a wonderful library. However, since I started loading 4-channel images I have had problems. The problem is that when I do the maximum projection, the image does not match the one obtained with ImageJ. I am using the library as follows:

img_0 = new.get_image(0)
z_list = np.array([np.array(i) for i in img_0.get_iter_z(t=0, c=0)])

Notice that I am converting the output to a Numpy array. The maximum projection is done as follows:

max_projection = z_list.max(axis = 0)

It does not matter which channel I select (0, 1, 2, 3): when I do the maximum projection, it does not correspond to the one produced by ImageJ. I clarify that this does not happen if the image has fewer than 4 channels.

I would greatly appreciate a suggestion.

Best regards, Rosalio.
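One thing worth ruling out, given the z/channel mapping issue reported elsewhere on this page: build the stack with explicit get_frame(z=, c=) calls instead of get_iter_z, so a channel/z mix-up in iteration order cannot silently blend channels. A sketch, assuming the LifImage attributes (nz, get_frame) used in other snippets here:

```python
import numpy as np

def max_projection(img, channel):
    """Max-project one channel over z using explicit (z, c) indexing."""
    stack = np.stack([np.asarray(img.get_frame(z=z, t=0, c=channel))
                      for z in range(img.nz)])
    return stack.max(axis=0)
```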

Strange lines in picture

Hello Nick,

I'm trying to open a lif file and look at the image afterwards, but some strange lines appear on the image. Is this incompatibility with the image format/version or am I missing something else? I added an example of what I'm seeing. The used code is posted below.
image

from readlif.reader import LifFile
new = LifFile('sample.lif')
img_0 = new.get_image(5)
pillowImage = img_0.get_frame(z=6, t=0, c=1)
pillowImage.show()

Thank you,
Dirk

Integer dims

Hello,
Thanks for sharing the code.
I was wondering if it would be better if the dims inside the info dictionary could be a tuple of integers instead of strings.
Thanks
