
dxchange's Introduction

DXchange

DXchange provides an interface between TomoPy and raw tomographic data collected at various synchrotron facilities, including data in the Data Exchange file format (DXfile), currently in use at the Advanced Photon Source beamlines 2-BM and 32-ID, the Swiss Light Source TOMCAT beamline, and the Elettra SYRMEP beamline.

Warning

DXchange will drop support for Python 2 before 1 January 2020. For more information, visit https://python3statement.org/.

Documentation

Features

  • Scientific Data Exchange file format.
  • Readers for tomographic data files collected at different facilities.
  • Writers for different file formats.

Highlights

  • Based on Hierarchical Data Format 5 (HDF5).
  • Focuses on technique rather than instrument descriptions.
  • Provenance tracking for understanding analysis steps and results.
  • Ease of readability.

Contribute

dxchange's People

Contributors

carterbox, cpchuang, data-exchange, decarlof, dgursoy, djvine, dmpelt, emmanuelle, gianthk, jeffkinnison, julreinhardt, kedokudo, lbluque, markrivers, michael-sutherland, mrakitin, prjemian, skielex, skylarjhdownes, tacaswell

dxchange's Issues

have functions to read other properties from files

Similar to #30, at ALS 8.3.2 we need to be able to read other properties from the h5 files.
Originally I had an extra property that was read when calling read_als_832h5, but in #25, based on #20, I removed it to keep the API the same across the other readers.
I think a good way to go is a metadata reader module in which functions for the different facilities return a dictionary with whatever metadata is deemed important.
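
A minimal sketch of what such a module could look like (the function name and HDF5 attribute layout are hypothetical, not part of the current dxchange API):

```python
import h5py

def read_als_832h5_metadata(fname):
    """Return a dict of facility-specific metadata from an ALS 8.3.2 HDF5 file.

    The attribute locations are placeholders; each facility's function
    would fill the dict with whatever metadata is deemed important.
    """
    metadata = {}
    with h5py.File(fname, 'r') as f:
        group = f[list(f.keys())[0]]   # first top-level group
        metadata.update(dict(group.attrs))
    return metadata
```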

BUG: read_hdf5 does not recognize all hdf5 file extensions

Bug

.h5, .hdf5, and .he5 are all hierarchical data format 5 file extensions, but read_hdf5 only recognizes .h5.

When any file extension other than .h5 is used, the following error is printed to the console:

```
ERROR:dxchange.reader:Unknown file extension
```

However, the program continues to run and reads the data from the files correctly.

Potential Fixes

  • Add cases to the file extension handler for .hdf5 and .he5 so that an error message is not printed (see the sketch after this list).
  • Exit the program when an unknown file extension is provided, or de-escalate the error to a warning explaining that the file will be assumed to be HDF5.
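
A minimal sketch of the first option, with the error de-escalated to a warning (check_extension and HDF5_EXTENSIONS are hypothetical names):

```python
import os
import warnings

# Recognize the common HDF5 extensions; warn (rather than error) on others.
HDF5_EXTENSIONS = {'.h5', '.hdf5', '.he5'}

def check_extension(fname):
    ext = os.path.splitext(fname)[1].lower()
    if ext not in HDF5_EXTENSIONS:
        warnings.warn('Unknown file extension %r; assuming HDF5' % ext)
```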

reader throws exception that the file could not be found, despite correct filepath

Hello. I'm a new user to dxchange/tomopy/github.

I found an issue that may not always happen, but can in certain circumstances. When I import a TIFF using dxchange.reader.read_tiff, the logger throws an exception saying the file doesn't exist. After trying 1000 different things (renaming the file, moving it to the root directory, etc.), nothing worked.

In the source for reader, you can see:

```python
try:
    arr = tifffile.imread(fname, out='memmap')
except IOError:
    logger.error('No such file or directory: %s', fname)
    return False
```

I tried tifffile.imread(fname), and it imported my TIFF. I then tried tifffile.imread(fname, out='memmap') and it threw an OS exception saying I did not have enough space. This was true: the TIFF was larger than the remaining disk space on my computer. Clearing some space fixed it, but I would never have known this because the dxchange logger was giving me a file-missing error.

Is there a way to surface a disk-space error when that is the actual error raised by tifffile.imread(fname, out='memmap')?
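
One possible approach, sketched below: check for the file explicitly, then let other OS errors (such as running out of disk space while creating the memmap) propagate with their real message (read_tiff_checked is a hypothetical name, not the dxchange API):

```python
import os
import tifffile

def read_tiff_checked(fname):
    # Report a missing file explicitly...
    if not os.path.exists(fname):
        raise FileNotFoundError('No such file or directory: %s' % fname)
    try:
        return tifffile.imread(fname, out='memmap')
    except OSError as exc:
        # ...and surface any other OS-level failure (e.g. no disk space)
        # with its real cause instead of a generic file-missing error.
        raise OSError('Failed to memory-map %s: %s' % (fname, exc)) from exc
```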

Have append functionality in writer functions

@decarlof It would be a nice enhancement to have append functionality in writer functions such as write_hdf5, write_dxf, and write_npy, to allow writing out these formats when chunking datasets on limited-memory machines.
If I find some time, I will look into implementing this in a future PR...
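
A minimal sketch of such an append mode using h5py directly (append_hdf5 is a hypothetical helper, not the current write_hdf5 API); the dataset is created resizable along axis 0 so chunks can be written one at a time:

```python
import h5py

def append_hdf5(fname, data, dname='exchange/data'):
    """Append a chunk of data along axis 0, creating the dataset if needed."""
    with h5py.File(fname, 'a') as f:
        if dname not in f:
            # Unlimited first axis so later chunks can be appended.
            f.create_dataset(dname, data=data,
                             maxshape=(None,) + data.shape[1:])
        else:
            dset = f[dname]
            dset.resize(dset.shape[0] + data.shape[0], axis=0)
            dset[-data.shape[0]:] = data
```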

Alignment module not working

I am having trouble setting up the alignment module. (tomopy.prep.alignment)

This is my code:

```python
data = tomopy.prep.alignment.align_seq(data, theta, fdir='.', iters=10, pad=(0, 0), blur=True, save=False, debug=True)
```

The error I'm getting is:

```
AttributeError                            Traceback (most recent call last)
in ()
----> 1 data = tomopy.prep.alignment.align_seq(data, theta, fdir='.', iters=10, pad=(0, 0), blur=True, save=False, debug=True)

AttributeError: 'module' object has no attribute 'alignment'
```

I updated tomopy to the latest version and got the same error. Thoughts/suggestions?

Convert .txm to .tiff

Forgive me for the simple question, but can this program be used to convert TXM files from Zeiss/Xradia micro-CT scans to TIFF stacks?

I have practically no Python experience, but I thought it might be a relatively simple operation to read in TXM files and write out a TIFF stack. So I tried the following:

```python
import tomopy
import dxchange

file_name = 'TLS575test.txm'

stack = dxchange.reader.read_txm(file_name)

dxchange.writer.write_tiff_stack(stack, 'test_tiff_output')
```

It appears to read the TXM file successfully, but then gives this error:

```
AttributeError: 'tuple' object has no attribute 'shape'
```

Am I even remotely on the right track, or way off?

Any tips would be greatly appreciated!
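
A sketch of a likely fix, assuming dxchange's read_txm returns a (data, metadata) tuple, which would explain the 'tuple' object error above:

```python
import dxchange

# Unpack the (data, metadata) tuple before writing (assumption: read_txm
# returns two values, as suggested by the AttributeError above).
data, metadata = dxchange.reader.read_txm('TLS575test.txm')
dxchange.writer.write_tiff_stack(data, 'test_tiff_output')
```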

Error encountered while reading XRM file

Hi, many thanks for this library. I managed to install it and read the example data files test_chip00.xrm and txrm_test_chip_tomo.txrm.
However, I encountered the following error while trying to read an XRM file exported from Xradia software:

```
UnboundLocalError: local variable 'arr' referenced before assignment
```

Any idea how to solve this issue?
An example XRM file can be found at this link.

ALS readers

It would be good to be able to load a subset of sinograms. See

```
sino : {sequence, int}
    Specify sinograms to read. (start, end, step)
```

in other importers.
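
For reference, a usage sketch of the sino argument as implemented in other importers (the file name is illustrative):

```python
import dxchange

# Read only sinogram rows 100..199 from an APS 32-ID style HDF5 file;
# sino=(start, end) selects a subset without loading the whole volume.
proj, flat, dark, theta = dxchange.read_aps_32id('sample.h5', sino=(100, 200))
```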

write_txrm

Would it be possible to also implement a write_txrm method? Then a user could read a .txrm file containing data and metadata (already implemented with the read_txrm method), make some alterations, and write a new .txrm file with the same format as the original so it can be read by the Zeiss software.

Dependency management

I recently installed dxchange and noticed that it depends on the tifffile module. This dependency is not documented anywhere, and it does not seem to be handled automatically by the installer. Can this be changed?

Add read_ole_metadata to module import

Would it be possible to add the read_ole_metadata function to the __all__ list in reader.py so that the metadata can be read without the entire dataset? This is useful to me because I want to use the metadata to load the dataset with a memmap.
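
In the meantime, read_ole_metadata can be imported directly from the module; a sketch, using the metadata keys shown elsewhere in this tracker (the file name is illustrative):

```python
import olefile
from dxchange.reader import read_ole_metadata

# Read only the metadata, then derive the array shape for a later memmap.
ole = olefile.OleFileIO('scan.txrm')
metadata = read_ole_metadata(ole)
shape = (metadata['number_of_images'],
         metadata['image_height'],
         metadata['image_width'])
ole.close()
```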

Bug in read_txrm

Hey,

I came across a bug in the read_txrm function related to slicing, as well as a type issue. I'll start with the bug: if you try to read out a portion of the images, you get an exception.

Example:

```python
reader.read_txm(file_path, slice_range=(slice(10, 11, None), slice(None), slice(None)))
```

Throws:

```
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-4-7cc0a3c63f99> in <module>
----> 1 reader.read_txm(file_path, slice_range=(slice(10, 11, None), slice(None), slice(None)))

C:\Anaconda\envs\py37\lib\site-packages\dxchange\reader.py in read_txm(file_name, slice_range)
    357     """
    358
--> 359     return read_txrm(file_name, slice_range)
    360
    361

C:\Anaconda\envs\py37\lib\site-packages\dxchange\reader.py in read_txrm(file_name, slice_range)
    322         img_string = "ImageData{}/Image{}".format(
    323             int(np.ceil((i + 1) / 100.0)), int(i + 1))
--> 324         array_of_images[i] = _read_ole_image(ole, img_string, metadata)[slice_range[1:]]
    325
    326     reference = metadata['reference']

IndexError: index 10 is out of bounds for axis 0 with size 1
```

The issue is that the index i is 10 while the size of array_of_images is just 1 in this case. Because of this, slices on the first axis only work if they start at 0 and step by 1.

There is also a smaller issue related to the data type. array_of_images is always created as float32, so even though _read_ole_image correctly returns data as uint16 in my case, it is converted to float32 before being returned, which matters a great deal when working with 30+ GB data files.
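
A minimal illustration of the indexing fix: write into the output array at its own running position j rather than at the absolute source index i.

```python
import numpy as np

src = np.arange(20)                      # stands in for the images on disk
sel = slice(10, 11)                      # the first-axis slice_range entry
idx = range(*sel.indices(len(src)))      # absolute indices selected: [10]
out = np.empty(len(idx))
for j, i in enumerate(idx):
    out[j] = src[i]                      # out[i] would raise IndexError here
```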

exchange.read_aps_13bm value error

@decarlof Not sure what is going on here. When I merge dxchange master into my fork, read_aps_13bm now returns "ValueError: too many values to unpack (expected 4)". I replicated this issue with the example doc as well.

I decided to debug with the rec_aps_13bm.py example. I took the netcdf4 method from exchange and placed it into rec_aps_13bm as a separate method. If I change slc = (proj, sino) to slc = None in the method, it passes the data just fine. If I make that change in dxchange.exchange and pass through dxchange, I receive the same ValueError. Thoughts?


```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-

"""
TomoPy example script to reconstruct the APS 13-BM tomography
data as original netcdf files. To use, change fname to just
the file name (e.g. 'sample[2].nc' would be 'sample').
Reconstructed dataset will be saved as float32 netcdf3.
"""
import glob
import numpy as np
import tomopy as tp
import dxchange as dx
import dxchange.reader as dxreader

from netCDF4 import Dataset

if __name__ == '__main__':
    ## Set path (without file suffix) to the micro-CT data to reconstruct.
    fname = 'Foram_B2.nc'  # example data; currently fails with any netcdf data.

    # If slc in read_netcdf4 is set to None, the data will return.
    def mf(fname):
        # Create a list of filenames in the directory sharing the same name scheme.
        files = glob.glob(fname[0:-5] + '*[1-3].nc')
        print(files)
        tomo = dxreader.read_netcdf4(files[1], 'array_data', slc=None)
        print('tomo', tomo.shape)
        flat1 = dxreader.read_netcdf4(files[0], 'array_data', slc=None)
        flat2 = dxreader.read_netcdf4(files[2], 'array_data', slc=None)
        flat = np.concatenate((flat1, flat2), axis=0)
        del flat1, flat2
        print('flat', flat.shape)
        # Mine the setup text file for the dark current.
        setup = glob.glob(fname[0:-5] + '*.setup')
        setup = open(setup[0], 'r')
        setup_data = setup.readlines()
        result = {}
        for line in setup_data:
            words = line[:-1].split(':', 1)
            result[words[0].lower()] = words[1]

        print('result', result)
        dark = float(result['dark_current'])
        dark = flat * 0 + dark
        print('dark', dark.shape)
        theta = np.linspace(0.0, np.pi, tomo.shape[0])
        print('theta', theta.shape)
        return tomo, flat, dark, theta

    ## Import data.

    # The commented line below is from the example doc. It yields a ValueError
    # even if you modify exchange to match the method above.
    # proj, flat, dark, theta = dx.exchange.read_aps_13bm(fname, format='netcdf4')

    # This works when calling the method above.
    proj, flat, dark, theta = mf(fname)

    # Beneath here the script returns to the example.

    ## Flat-field correction of raw data.
    proj = tp.normalize(proj, flat=flat, dark=dark)

    ## Additional flat-field correction of raw data to negate the need to mask.
    proj = tp.normalize_bg(proj, air=10)

    ## Set rotation center.
    rot_center = tp.find_center_vo(proj)
    print('Center of rotation: ', rot_center)

    tp.minus_log(proj, out=proj)

    # Reconstruct object using the gridrec algorithm.
    rec = tp.recon(proj, theta, center=rot_center, sinogram_order=False,
                   algorithm='gridrec', filter_name='hann')
    rec = tp.remove_nan(rec)
```

New tag for dxchange?

I'm in the process of writing a recipe for tomopy (cc @dgursoy @licode ) to add to conda-forge and need to package this as part of that process. Would it be possible to get a new tag on this project so I can point at something newer than October of 2014?

Thanks!

project importing dxchange fails to build with sphinx/autodoc

if you import dxchange in another project, autodoc will fail to build with:

```
/astropy/utils/compat/numpycompat.py", line 18, in
NUMPY_LT_1_6_1 = not minversion('numpy', '1.6.1')
```

Is this an astropy issue?

UnboundLocalError in _read_ole_data when reading txrm file

This bug is in the version of dxchange obtained through conda, I have not checked if the current version on github has the same issue.

When reading a TXRM file generated by an XRadia machine using dxchange.read_txrm, I get:

```
UnboundLocalError: local variable 'arr' referenced before assignment
```

This occurs because containers in the TXRM file can sometimes be empty. I solved the problem by modifying dxchange/reader.py to return None when the container is empty. On line 906 of reader.py:

```python
if ole.exists(label):
    stream = ole.openstream(label)
    data = stream.read()
    arr = struct.unpack(struct_fmt, data)
    return arr
else:
    return None
```

pip-installable package

Dear developers,

I've tried to install the package, but noticed a few things which could be improved:

  • The requirements.txt file contains a few dependencies which cannot be found on PyPI, i.e. installation using pip install -r requirements.txt fails. The packages in question are spefile, edffile, and dxfile. Are there any plans to add those packages to PyPI, if they are under your control? All the names are available on PyPI.
  • Automatic installation of the dependencies would help a lot once the first point is resolved. An example can be found here (lines 7-8 and 21); see also the sketch below.
  • Eventually, it would be nice to have a dxchange package on PyPI. The name is available.

The changes above would benefit the installation of tomopy dependencies. Thank you!
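
For the second point, a minimal setup.py sketch of automatic dependency installation with setuptools (the install_requires list is illustrative only):

```python
from setuptools import setup, find_packages

# Declaring dependencies here lets pip resolve and install them automatically.
setup(
    name='dxchange',
    packages=find_packages(),
    install_requires=['numpy', 'h5py', 'tifffile'],
)
```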

Regards,
Maksim Rakitin

uint32 save_tiff_stack

Hey,

I am trying to use save_tiff_stack with dtype='uint32' and the saved TIFFs are corrupted (it tries to save 64 bits of information).
When I cast with im = im.astype('uint8') or ('uint16'), it works great.
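
A workaround sketch of the cast, assuming dxchange.write_tiff_stack as the writer and rec as the stack to save:

```python
import numpy as np
import dxchange

# Cast to a TIFF-friendly dtype before saving; uint32 stacks come out corrupted.
rec = np.random.rand(4, 64, 64) * 65535   # placeholder data
dxchange.write_tiff_stack(rec.astype('uint16'), fname='out/recon')
```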

ALS tiff reader fail data load

tested on https://www.globus.org/app/transfer?origin_id=5e6eeff0-00c4-11e6-a71b-22000bf2d559&origin_path=%2F

```python
fname = '/local/data/templates/als_beamline_8.3.2/sample_name'

read_als_832(fname)
```

fails to load the data.

```python
# File definitions.
fname = os.path.abspath(fname)

if not normalized:
    fname = fname.split(
        'output')[0] + fname.split('/')[len(fname.split('/')) - 1]
    tomo_name = fname + '_0000_0000.tif'
    flat_name = fname + 'bak_0000.tif'
    dark_name = fname + 'drk_0000.tif'
    log_file = fname + '.sct'
```
If I remove

```python
fname = fname.split('output')[0] + fname.split('/')[len(fname.split('/')) - 1]
```

it works. So one question is: what is the naming convention for the tiff files?

ALS hdf reader fail data load

```python
proj, flat, dark = dxchange.read_als_832h5(fname)
```

fails on 20150820_124324_Dleucopodia_10458_pieceA_10x_z80mm_moreangles.h5 with:

```
Traceback (most recent call last):
File "rec_als_hdf5.py", line 172, in
proj, flat, dark = dxchange.read_als_832h5(fname)
ValueError: too many values to unpack
```

open .dat file

My tomographic data are already processed and stored in .dat files. The experiment was carried out in several sequences of scanning the sample with a microbeam and then rotating it from 0 to 360°. I could not find any script to convert these data for tomopy reconstruction. Could you help me?

information about 'tooth.h5' file

Platform information:

  • OS: Windows 10
  • Python version: 3.9
  • TomoPy version: 1.11

I am interested in how the 'tooth.h5' file was made. Is there a Python script or some other way to create the tooth.h5 file?

Thank you.

Reading TXM and TXRM

I have some TXM and TXRM files from Xradia. I was able to read a TXRM file using reader.read_txrm, but as far as I know there is no TXM reader yet. Is there a plan to add one soon?

write_hdf5 presents a problem with 'maxshape'

I am facing a problem when trying to store a stack of images into an HDF5 file using the write_hdf5 function:

```
Traceback (most recent call last):
File "test.py", line 56, in
dxchange.write_hdf5(rec1)
File "/homelocal/sicilia/anaconda2/lib/python2.7/site-packages/dxchange/writer.py", line 254, in write_hdf5
appendaxis, maxshape)
UnboundLocalError: local variable 'maxshape' referenced before assignment
```

Interpreting txrm meta data.

This question may be too specific to TXRM data. I am sorry if it is not appropriate for this repository.

I want to reconstruct from TXRM raw data using the ASTRA toolbox. How can I find the distances from the origin to the detector and to the source? In the metadata I got the following, and I don't know how to extract those distances from it:

```python
{'facility': None,
 'image_width': 1024,
 'image_height': 1024,
 'data_type': 5,
 'number_of_images': 1601,
 'pixel_size': 44.99769973754883,
 'reference_filename': b'\x00\x14...,
 'reference_data_type': 10,
 'thetas': array([-3.14149678, -3.13759498, -3.13380877, ...,  3.13378267,
         3.13758513,  3.1416089 ]),
 'x_positions': array([-1310.20007324,  -936.60003662, -1464.05004883, ..., -1247.09997559,
        -1401.        , -1036.80004883]),
 'y_positions': array([-31380.65039062, -31420.55078125, -31247.84960938, ...,
        -31515.75      , -31253.65039062, -31294.34960938]),
 'x-shifts': array([-0.05141851, -8.35447216,  3.36704612, ..., -1.39941931,
         2.02075362, -6.07303762]),
 'y-shifts': array([ 0.23718034, -0.64975119,  3.18803573, ..., -3.07316613,
         2.75161076,  1.84718478]),
 'reference': array([[ 960.06005859,  957.91125488,  959.35284424, ...,  868.76916504,...
```

Deprecation in _read_ole_image function

The np.fromstring() function at line 1155 in reader.py causes the following warning in a program I am writing:

```
DeprecationWarning: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead!
```

I can suppress/ignore it, but I was wondering if this is something that should be updated? I modified the function to use np.frombuffer and it seems to work the same.
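
A minimal sketch of the suggested replacement; np.frombuffer reads the same binary input without the deprecation warning:

```python
import numpy as np

buf = np.arange(4, dtype=np.uint16).tobytes()
arr = np.frombuffer(buf, dtype=np.uint16)   # drop-in for np.fromstring(buf, dtype=...)
```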

tifffile API change results in error when reading via dxchange.read_tiff

When using dxchange.read_tiff("img.tiff"), the following TypeError is thrown:

```
File ~/opt/miniconda3/envs/imars3d-dev/lib/python3.10/site-packages/dxchange/reader.py:154, in read_tiff(fname, slc)
    152 fname = _check_read(fname)
    153 try:
--> 154     arr = tifffile.imread(fname, out='memmap')
    155 except IOError:
    156     logger.error('No such file or directory: %s', fname)

File ~/opt/miniconda3/envs/imars3d-dev/lib/python3.10/site-packages/tifffile/tifffile.py:952, in imread(files, aszarr, key, series, level, squeeze, maxworkers, name, offset, size, pattern, axesorder, categories, imread, sort, container, axestiled, ioworkers, chunkmode, fillvalue, zattrs, _multifile, _useframes, **kwargs)
    949 is_flags = parse_kwargs(kwargs, *(k for k in kwargs if k[:3] == 'is_'))
    951 if imread is None and kwargs:
--> 952     raise TypeError(
    953         'imread() got unexpected keyword arguments '
    954         + ', '.join(f"'{key}'" for key in kwargs)
    955     )
    957 if container is None:
    958     if isinstance(files, str) and ('*' in files or '?' in files):

TypeError: imread() got unexpected keyword arguments 'out'
```

It seems that tifffile.imread no longer accepts the out argument, which is causing this issue.
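
Until dxchange is updated, a small compatibility shim can bridge the API change; a sketch (imread_compat is a hypothetical wrapper):

```python
import tifffile

def imread_compat(fname):
    # Newer tifffile versions reject out='memmap'; fall back to a plain read.
    try:
        return tifffile.imread(fname, out='memmap')
    except TypeError:
        return tifffile.imread(fname)
```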

asarray() no longer accepts memmap=True as a kwarg

As of 2017-09-29 the asarray() method in tifffile no longer accepts memmap=True. It now uses out='memmap'.

```python
arr = tifffile.imread(fname, memmap=True)
```

from reader.read_tiff() needs to be changed to:

```python
arr = tifffile.imread(fname, out='memmap')
```

v0.1.7 fails to install with current version of tomopy

dxchange 0.1.7 is now available on cf-staging. However, when I try to install it into the environment created for tomopy, it fails to install.

```
(base) C:\Data\Rivers\Rivers>conda create --name tomopy_test --channel conda-forge tomopy
Collecting package metadata (current_repodata.json): done
Solving environment: done


==> WARNING: A newer version of conda exists. <==
  current version: 4.9.2
  latest version: 4.10.3

Please update conda by running

    $ conda update -n base -c defaults conda



## Package Plan ##

  environment location: C:\Users\rivers\Anaconda3\envs\tomopy_test

  added / updated specs:
    - tomopy


The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    scikit-image-0.18.3        |   py39h2e25243_1        10.7 MB  conda-forge
    yaml-0.2.5                 |       he774522_0          61 KB  conda-forge
    ------------------------------------------------------------
                                           Total:        10.8 MB

The following NEW packages will be INSTALLED:

  appdirs            conda-forge/noarch::appdirs-1.4.4-pyh9f0ad1d_0
  blosc              conda-forge/win-64::blosc-1.21.0-h0e60522_0
  brotli             conda-forge/win-64::brotli-1.0.9-h8ffe710_6
  brotli-bin         conda-forge/win-64::brotli-bin-1.0.9-h8ffe710_6
  brotlipy           conda-forge/win-64::brotlipy-0.7.0-py39hb82d6ee_1003
  bzip2              conda-forge/win-64::bzip2-1.0.8-h8ffe710_4
  c-blosc2           conda-forge/win-64::c-blosc2-2.0.4-h09319c2_1
  ca-certificates    conda-forge/win-64::ca-certificates-2021.10.8-h5b45459_0
  certifi            conda-forge/win-64::certifi-2021.10.8-py39hcbf5309_1
  cffi               conda-forge/win-64::cffi-1.15.0-py39h0878f49_0
  cfitsio            conda-forge/win-64::cfitsio-4.0.0-hd67004f_0
  charls             conda-forge/win-64::charls-2.2.0-h39d44d4_0
  charset-normalizer conda-forge/noarch::charset-normalizer-2.0.8-pyhd8ed1ab_0
  cloudpickle        conda-forge/noarch::cloudpickle-2.0.0-pyhd8ed1ab_0
  cryptography       conda-forge/win-64::cryptography-36.0.0-py39h7bc7c5c_0
  cycler             conda-forge/noarch::cycler-0.11.0-pyhd8ed1ab_0
  cytoolz            conda-forge/win-64::cytoolz-0.11.2-py39hb82d6ee_1
  dask-core          conda-forge/noarch::dask-core-2021.11.2-pyhd8ed1ab_0
  fonttools          conda-forge/win-64::fonttools-4.28.2-py39hb82d6ee_0
  freetype           conda-forge/win-64::freetype-2.10.4-h546665d_1
  fsspec             conda-forge/noarch::fsspec-2021.11.1-pyhd8ed1ab_0
  giflib             conda-forge/win-64::giflib-5.2.1-h8d14728_2
  idna               conda-forge/noarch::idna-3.1-pyhd3deb0d_0
  imagecodecs        conda-forge/win-64::imagecodecs-2021.11.20-py39he391c9c_1
  imageio            conda-forge/noarch::imageio-2.9.0-py_0
  intel-openmp       conda-forge/win-64::intel-openmp-2021.4.0-h57928b3_3556
  jbig               conda-forge/win-64::jbig-2.1-h8d14728_2003
  jpeg               conda-forge/win-64::jpeg-9d-h8ffe710_0
  jxrlib             conda-forge/win-64::jxrlib-1.1-h8ffe710_2
  kiwisolver         conda-forge/win-64::kiwisolver-1.3.2-py39h2e07f2f_1
  krb5               conda-forge/win-64::krb5-1.19.2-h20d022d_3
  lcms2              conda-forge/win-64::lcms2-2.12-h2a16943_0
  lerc               conda-forge/win-64::lerc-3.0-h0e60522_0
  libaec             conda-forge/win-64::libaec-1.0.6-h39d44d4_0
  libblas            conda-forge/win-64::libblas-3.9.0-12_win64_mkl
  libbrotlicommon    conda-forge/win-64::libbrotlicommon-1.0.9-h8ffe710_6
  libbrotlidec       conda-forge/win-64::libbrotlidec-1.0.9-h8ffe710_6
  libbrotlienc       conda-forge/win-64::libbrotlienc-1.0.9-h8ffe710_6
  libcblas           conda-forge/win-64::libcblas-3.9.0-12_win64_mkl
  libcurl            conda-forge/win-64::libcurl-7.80.0-h789b8ee_0
  libdeflate         conda-forge/win-64::libdeflate-1.8-h8ffe710_0
  liblapack          conda-forge/win-64::liblapack-3.9.0-12_win64_mkl
  libpng             conda-forge/win-64::libpng-1.6.37-h1d00b33_2
  libssh2            conda-forge/win-64::libssh2-1.10.0-h680486a_2
  libtiff            conda-forge/win-64::libtiff-4.3.0-hd413186_2
  libwebp-base       conda-forge/win-64::libwebp-base-1.2.1-h8ffe710_0
  libzlib            conda-forge/win-64::libzlib-1.2.11-h8ffe710_1013
  libzopfli          conda-forge/win-64::libzopfli-1.0.3-h0e60522_0
  locket             conda-forge/noarch::locket-0.2.0-py_2
  lz4-c              conda-forge/win-64::lz4-c-1.9.3-h8ffe710_1
  m2w64-gcc-libgfor~ conda-forge/win-64::m2w64-gcc-libgfortran-5.3.0-6
  m2w64-gcc-libs     conda-forge/win-64::m2w64-gcc-libs-5.3.0-7
  m2w64-gcc-libs-co~ conda-forge/win-64::m2w64-gcc-libs-core-5.3.0-7
  m2w64-gmp          conda-forge/win-64::m2w64-gmp-6.1.0-2
  m2w64-libwinpthre~ conda-forge/win-64::m2w64-libwinpthread-git-5.0.0.4634.697f757-2
  matplotlib-base    conda-forge/win-64::matplotlib-base-3.5.0-py39h581301d_0
  mkl                conda-forge/win-64::mkl-2021.4.0-h0e2418a_729
  mkl_fft            conda-forge/win-64::mkl_fft-1.3.1-py39h0cb33c3_1
  msys2-conda-epoch  conda-forge/win-64::msys2-conda-epoch-20160418-1
  munkres            conda-forge/noarch::munkres-1.1.4-pyh9f0ad1d_0
  networkx           conda-forge/noarch::networkx-2.6.3-pyhd8ed1ab_1
  numexpr            conda-forge/win-64::numexpr-2.7.3-py39h2e25243_2
  numpy              conda-forge/win-64::numpy-1.21.4-py39h6635163_0
  olefile            conda-forge/noarch::olefile-0.46-pyh9f0ad1d_1
  openjpeg           conda-forge/win-64::openjpeg-2.4.0-hb211442_1
  openssl            conda-forge/win-64::openssl-1.1.1l-h8ffe710_0
  packaging          conda-forge/noarch::packaging-21.3-pyhd8ed1ab_0
  pandas             conda-forge/win-64::pandas-1.3.4-py39h2e25243_1
  partd              conda-forge/noarch::partd-1.2.0-pyhd8ed1ab_0
  pillow             conda-forge/win-64::pillow-8.4.0-py39h916092e_0
  pip                conda-forge/noarch::pip-21.3.1-pyhd8ed1ab_0
  pooch              conda-forge/noarch::pooch-1.5.2-pyhd8ed1ab_0
  pycparser          conda-forge/noarch::pycparser-2.21-pyhd8ed1ab_0
  pyopenssl          conda-forge/noarch::pyopenssl-21.0.0-pyhd8ed1ab_0
  pyparsing          conda-forge/noarch::pyparsing-3.0.6-pyhd8ed1ab_0
  pysocks            conda-forge/win-64::pysocks-1.7.1-py39hcbf5309_4
  python             conda-forge/win-64::python-3.9.7-h7840368_3_cpython
  python-dateutil    conda-forge/noarch::python-dateutil-2.8.2-pyhd8ed1ab_0
  python_abi         conda-forge/win-64::python_abi-3.9-2_cp39
  pytz               conda-forge/noarch::pytz-2021.3-pyhd8ed1ab_0
  pywavelets         conda-forge/win-64::pywavelets-1.2.0-py39h5d4886f_1
  pyyaml             conda-forge/win-64::pyyaml-6.0-py39hb82d6ee_3
  requests           conda-forge/noarch::requests-2.26.0-pyhd8ed1ab_1
  scikit-image       conda-forge/win-64::scikit-image-0.18.3-py39h2e25243_1
  scipy              conda-forge/win-64::scipy-1.7.3-py39hc0c34ad_0
  setuptools         conda-forge/win-64::setuptools-59.4.0-py39hcbf5309_0
  six                conda-forge/noarch::six-1.16.0-pyh6c4a22f_0
  snappy             conda-forge/win-64::snappy-1.1.8-ha925a31_3
  sqlite             conda-forge/win-64::sqlite-3.37.0-h8ffe710_0
  tbb                conda-forge/win-64::tbb-2021.4.0-h2d74725_1
  tifffile           conda-forge/noarch::tifffile-2021.11.2-pyhd8ed1ab_0
  tk                 conda-forge/win-64::tk-8.6.11-h8ffe710_1
  tomopy             conda-forge/win-64::tomopy-1.11.0-py39h17543ac_100
  toolz              conda-forge/noarch::toolz-0.11.2-pyhd8ed1ab_0
  tzdata             conda-forge/noarch::tzdata-2021e-he74cb21_0
  ucrt               conda-forge/win-64::ucrt-10.0.20348.0-h57928b3_0
  urllib3            conda-forge/noarch::urllib3-1.26.7-pyhd8ed1ab_0
  vc                 conda-forge/win-64::vc-14.2-hb210afc_5
  vs2015_runtime     conda-forge/win-64::vs2015_runtime-14.29.30037-h902a5da_5
  wheel              conda-forge/noarch::wheel-0.37.0-pyhd8ed1ab_1
  win_inet_pton      conda-forge/win-64::win_inet_pton-1.1.0-py39hcbf5309_3
  xz                 conda-forge/win-64::xz-5.2.5-h62dcd97_1
  yaml               conda-forge/win-64::yaml-0.2.5-he774522_0
  zfp                conda-forge/win-64::zfp-0.5.5-h0e60522_8
  zlib               conda-forge/win-64::zlib-1.2.11-h8ffe710_1013
  zstd               conda-forge/win-64::zstd-1.5.0-h6255e5f_0


Proceed ([y]/n)? y


Downloading and Extracting Packages
yaml-0.2.5           | 61 KB     | ############################################################################ | 100%
scikit-image-0.18.3  | 10.7 MB   | ############################################################################ | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
#     $ conda activate tomopy_test
#
# To deactivate an active environment, use
#
#     $ conda deactivate

(base) C:\Data\Rivers\Rivers>conda activate tomopy_test


(tomopy_test) C:\Data\Rivers\Rivers>conda search -c cf-staging dxchange
Loading channels: done
# Name                       Version           Build  Channel
dxchange                       0.1.7    pyhd8ed1ab_0  cf-staging

(tomopy_test) C:\Data\Rivers\Rivers>conda install -c cf-staging dxchange=0.1.7
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: /
Found conflicts! Looking for incompatible packages.
This can take several minutes.  Press CTRL-C to abort.
failed

UnsatisfiableError: The following specifications were found to be incompatible with each other:

Output in format: Requested package -> Available versions
```

[discussion] hdf file browser

Hello! I stumbled across the package https://github.com/m-rossi/hdf5widget and it works very well for traversing HDF5 files in Jupyter notebooks. It may be of interest to you all; not to add to dxchange, just FYI.

Working on a way to access hdf data interactively and plot it.

The read_hdf_meta function works great for showing the HDF contents as text, but I'm wondering if you have anything interactive. If any of you already have a solution for this, send it my way.

Here is a preview (screenshot omitted); the goal is to be able to click on a dataset and have it plotted in the viewer to the right.

This is also a nice package for viewing and visualizing data, but it has no Python API, so you cannot access the data you select (as far as I am aware): https://github.com/silx-kit/jupyterlab-h5web.

Version incompatible with the latest numpy

import dxchange raises "AttributeError: module 'numpy' has no attribute 'typeDict'".
Since np.typeDict has been deprecated since numpy 1.21, the error occurs whenever numpy is up to date. Users can keep an old version of numpy instead, but that may cause problems with other packages that target the latest numpy.
Could you please fix this? Many thanks!
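
Until a release fixes this, a defensive lookup can serve as a workaround; a minimal sketch, assuming np.sctypeDict (the dictionary np.typeDict aliased) is still available in the installed numpy:

```python
import numpy as np

# np.typeDict was an alias of np.sctypeDict and was removed from newer
# numpy, so fall back to the surviving name when the alias is gone.
type_dict = getattr(np, 'typeDict', None) or np.sctypeDict
```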

Standardize output API for the different readers

Some functions read properties from the file and return them as a fourth output variable. Others, however, return only three variables (tomo, flat, and dark), or fewer. Should we try to standardize this output API a bit, to make it easier to write code/scripts that support all readers?

One way would be to always return four variables: tomo, flat, dark, and a Python dictionary with all metadata that can be read (or that is requested through input arguments). This dictionary could have a public 'API' as well, for example always making the projection angles accessible under the 'angles' key; to check whether angles are available, a user could then do something like if 'angles' in metadata. Another option is to always return just a single variable: a Python dictionary whose 'tomo' etc. keys hold the corresponding arrays, but that might be a bit too generic.

These changes will all be very easy to implement, but we would first have to decide on what we want.
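
A minimal sketch of the first option, including the availability check described above (read_facility_x is a hypothetical reader, not part of dxchange):

```python
# Every reader returns exactly four values; metadata keys such as
# 'angles' form a small public API across facilities.
def read_facility_x(fname):
    tomo, flat, dark = ..., ..., ...        # arrays read from fname
    metadata = {'angles': ...}              # whatever the file provides
    return tomo, flat, dark, metadata

tomo, flat, dark, metadata = read_facility_x('scan.h5')
if 'angles' in metadata:                    # the proposed availability check
    theta = metadata['angles']
```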
