lumispy / lumispy
Luminescence data analysis with HyperSpy.
Home Page: https://lumispy.org
License: GNU General Public License v3.0
Go through the documentation and replace absolute links to the hyperspy documentation with sphinx references of the type :external+hyperspy:ref:.
Should be done once the new labels from hyperspy/hyperspy#3085 and hyperspy/hyperspy#3086 are available.
Hi,
It seems some backwards incompatible changes were made between version 0.1.0 and 0.1.2.
My code for 0.1.0 made use of
lumispy.signals.luminescence_spectrum.LumiSpectrum.background_subtraction
In 0.1.2 this is now broken, as the method has been removed. It seems this functionality has been moved to:
lumispy.signals.luminescence_spectrum.LumiSpectrum.remove_background_from_file
However, it seems the implementation of this method differs from the background_subtraction method in 0.1.0, as swapping the two doesn't fix the issue.
In general, my expectation for versioning is that a bump in the patch number, i.e. 0.1.0 to 0.1.2, would only incorporate backwards compatible bug fixes, as described in the semantic versioning docs.
My expectation for removing a feature like lumispy.signals.luminescence_spectrum.LumiSpectrum.background_subtraction
would be that this requires a major bump in the version number, ideally with a deprecation warning added during a minor bump first.
That all said, I am very pleased at the decision to change the name from a noun to a verb!
Thanks for your work on this package, it's super cool, hope to add some features at some point.
Hugh
An issue to openly discuss the class hierarchy for lumispy, from an e-mail with @jlaehne.
We want to start out with CL, but need a more general parent that allows adding PL and EL at some point. You will probably also want to add TR-CL/PL at some point.
In general, I guess something like
-lumispec (inheriting from hyperspy/signal1D)
|-cl
|-cl-sem
|-cl-stem
|-pl
|-el
-trlumi (inheriting from hyperspy/signal2D ?)
|-trcl
|-trpl
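The tree above can be sketched as a minimal Python class hierarchy. All class names, signal_type strings, and the stand-in base classes below are placeholders; in lumispy these would inherit from hyperspy's actual Signal1D/Signal2D classes.

```python
# Stand-ins for hyperspy.signals.Signal1D / Signal2D, to keep the sketch
# self-contained.
class Signal1D:
    pass

class Signal2D:
    pass

# Spectral branch: generic luminescence parent, then technique-specific
# subclasses (hypothetical names).
class LumiSpectrum(Signal1D):
    signal_type = "Luminescence"

class CLSpectrum(LumiSpectrum):
    signal_type = "CL"

class CLSEMSpectrum(CLSpectrum):
    signal_type = "CL_SEM"

class CLSTEMSpectrum(CLSpectrum):
    signal_type = "CL_STEM"

class PLSpectrum(LumiSpectrum):
    signal_type = "PL"

class ELSpectrum(LumiSpectrum):
    signal_type = "EL"

# Time-resolved branch, inheriting from Signal2D as proposed above.
class LumiTransient(Signal2D):
    signal_type = "TRLumi"

class TRCLSpectrum(LumiTransient):
    signal_type = "TRCL"

class TRPLSpectrum(LumiTransient):
    signal_type = "TRPL"
```

With this layout, any subclass automatically inherits the generic luminescence functionality from its parent, while technique-specific methods live on the leaves.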
Hi all,
I was recently discussing setting up a Zenodo DOI for any publication that may come from the code we are actively writing.
I have seen that both hyperspy and pyxem use such a system which keeps track of the versions for referencing.
Before I set up a Zenodo account for this repo, I wanted to discuss with you @jlaehne @ericpre if you have any objections/suggestions before going ahead.
Since we don't have any version released yet (which we should focus on soon, I think), will that be an issue with getting a DOI?
Thank you
A method called remove_background_signal, where a Signal1D object is subtracted from another Signal1D object.
If the axes don't match (in size, offset and scale for a UniformDataAxis), the background signal should be rebinned/interpolated onto the main signal axis.
When trying to fix some warning bugs, discussion made us realise the need to rewrite the deprecated remove_background_from_file as a more HyperSpy-like method, as discussed in #114.
Ideally it should also support non-uniform data axes with interpolation.
Maybe add support for fitting a ScalableFixedPattern, as @ericpre proposed.
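A minimal sketch of the proposed behaviour, operating on bare numpy arrays rather than Signal1D objects (the function and parameter names here are hypothetical, not the eventual lumispy API):

```python
import numpy as np

def subtract_background(signal_axis, signal_data, bkg_axis, bkg_data):
    """Subtract a background spectrum from a signal, interpolating the
    background onto the signal axis when the axes differ.
    Illustrative helper: the real method would operate on Signal1D
    objects and their axes_manager instead of bare arrays.
    Assumes bkg_axis is monotonically increasing (np.interp requires it).
    """
    signal_axis = np.asarray(signal_axis, dtype=float)
    bkg_axis = np.asarray(bkg_axis, dtype=float)
    if bkg_axis.shape != signal_axis.shape or not np.allclose(bkg_axis, signal_axis):
        # Axes don't match: interpolate the background onto the signal axis.
        bkg_data = np.interp(signal_axis, bkg_axis, bkg_data)
    return signal_data - bkg_data
```

The same idea extends to non-uniform axes, since the interpolation only needs the axis values, not a uniform scale/offset.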
#44 introduced the following deprecation warning, which we should fix:
lumispy/tests/test_luminescence_spectrum.py::TestLumiSpectrum::test_errors_raise
/home/jonas/medien/pdi/git/lumispy/lumispy/signals/luminescence_spectrum.py:312: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
background_xy = np.array(background)
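For reference, a minimal illustration of the fix the warning itself suggests, assuming background is a ragged list of arrays as in the failing test (array contents here are arbitrary examples):

```python
import numpy as np

# Two arrays of different length form a "ragged" nested sequence; recent
# NumPy requires an explicit object dtype here (otherwise it warns, and
# newer versions raise an error instead).
background = [np.linspace(0.0, 49.0, 10), np.ones(30)]
background_xy = np.array(background, dtype=object)
```

Whether dtype=object is the right fix, or whether the ragged input should be rejected earlier with a clearer error, is a design choice for the method itself.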
@jordiferrero You put the attolight reader under utils/io_utils. A few points concerning that:
- io_plugins would be a more appropriate name than io_utils.
- io_utils is a very generic name for a function that is specific to one brand of instrument. As we want to develop the package in a way that can encompass different manufacturers, and on top of CL also PL and at some point maybe Raman, we should have separate io files for different instruments.
- We should have a load function similar to hyperspy that is generic and automatically chooses the right io_plugin (this should include the plugins provided by HyperSpy, which will be moved to a separate IO-package at some point).

In this file of the API documentation, for some functions the first line of the docstring is highlighted (which it should not be), as if there is an indentation error. Anyone have an idea? @ericpre @hakonanes you have more experience with that.
https://lumispy.readthedocs.io/en/latest/api/lumispy.signals.luminescence_spectrum.html
Examples are px_to_nm_grating_solver, to_array, to_invcm, to_invcm_relative.
We missed something when introducing the transient signal classes last year (and no one seems to have used them so far? Anyway there are no specific functions). At the moment, we have a set of Signal2D classes for transients:
Signal class | Dimension | Signal type | dtype | Aliases
LumiTransient | 2 | Luminescence | real | TRLumi, TR luminescence, time-resolved luminescence
CLTransient | 2 | TRCL | real | TR cathodoluminescence, time-resolved cathodoluminescence
PLTransient | 2 | TRPL | real | TR photoluminescence, time-resolved photoluminescence
That is correct for a streak camera image <x,y|E,t>, but not for a pure transient <x,y|t> (without spectral information).
The ...Transient classes should in fact be Signal1D, while there should be an additional set of classes ...TransientSpectrum for streak camera images. Any opinions? I would propose to fix it for the v0.2 release.
(see original discussion in #5 (comment) )
As soon as HyperSpy v1.7 is released, we should release v0.2 to be included in the bundle. The following PRs should be merged by then:
An error occurs when slicing a signal with isig[x:y] that was converted using the lumispy functions to_eV() or to_invcm().
Steps to reproduce the behavior:
import numpy as np
import hyperspy.api as hs
S = hs.signals.Signal1D(np.arange(100),axes=[{'axis': np.arange(100)+300}])
S.set_signal_type("CL")
S.to_eV(inplace=True)
S.isig[3.251:4.052]
Leads to TypingError:
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<function diff at 0x7fad54e2c310>) found for signature:
>>> diff(array(float64, 1d, A))
There are 2 candidate implementations:
- Of which 2 did not match due to:
Overload in function 'np_diff_impl': File: numba/np/arraymath.py: Line 3684.
With argument(s): '(array(float64, 1d, A))':
Rejected as the implementation raised a specific error:
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
- Resolution failure for literal arguments:
reshape() supports contiguous array only
- Resolution failure for non-literal arguments:
reshape() supports contiguous array only
During: resolving callee type: BoundFunction(array.reshape for array(float64, 1d, A))
During: typing of call at /usr/lib/python3.10/site-packages/numba/np/arraymath.py (3702)
File "../../../../usr/lib/python3.10/site-packages/numba/np/arraymath.py", line 3702:
def diff_impl(a, n=1):
<source elided>
# To make things easier, normalize input and output into 2d arrays
a2 = a.reshape((-1, size))
^
raised from /usr/lib/python3.10/site-packages/numba/core/typeinfer.py:1086
During: resolving callee type: Function(<function diff at 0x7fad54e2c310>)
During: typing of call at /home/jonas/medien/pdi/git/hyperspy/hyperspy/misc/array_tools.py (428)
File "../../medien/pdi/git/hyperspy/hyperspy/misc/array_tools.py", line 428:
def numba_closest_index_round(axis_array, value_array):
<source elided>
rtol = 1e-10
machineepsilon = np.min(np.abs(np.diff(axis_array))) * rtol
^
If the energy axis is set up manually, it works:
import numpy as np
import hyperspy.api as hs
import lumispy as lum
S1 = hs.signals.Signal1D(np.arange(100),axes=[{'axis': lum.nm2eV((np.arange(100)+300)[::-1])}])
S1.set_signal_type("CL")
S1.isig[3.251:4.052]
A fix will be submitted, forcing the dtype in axis2eV and axis2invcm.
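A sketch of the kind of fix involved: reversing the converted axis yields a non-contiguous view, which numba-compiled helpers (np.diff goes through reshape) reject, so the conversion should return a contiguous float64 array. The function name and constant below are illustrative; lumispy's real axis2eV also handles the refractive index of air.

```python
import numpy as np

def nm_to_ev_axis(wavelength_nm):
    """Convert a wavelength axis (nm) to energy (eV), reversing so the
    axis stays increasing, and force a contiguous float64 array so that
    numba's np.diff accepts it. Simplified sketch, not lumispy's axis2eV.
    """
    hc = 1239.84198  # eV*nm (approximate, vacuum)
    # The reversed slice is a non-contiguous view -- this is what trips numba.
    energy = hc / np.asarray(wavelength_nm, dtype=np.float64)[::-1]
    return np.ascontiguousarray(energy)
```

Forcing contiguity at conversion time means downstream slicing (isig) never sees the problematic view.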
Similar to the centroid function from #168, we should create further functions that give back information about a peak without fitting.
Maximum of a peak
HyperSpy provides the valuemax
function to find the position of a maximum of a signal. With smoothing and signal ranges that can work also on noisier data or data with multiple peaks, respectively. However, an alternative approach that I would like to see implemented is taking the first derivative and then finding zero-crossings. In particular this method is useful to identify the positions of secondary maxima.
EDIT: This functionality is actually provided by find_peaks1D_ohaver -- it's been a while since I had looked at that function.
Width of a peak
From fits, you can get the FWHM of a peak, but that is not very helpful for asymmetric peak shapes. I therefore envisage a function that calculates the distance between the half maximum points of a peak along the signal axis.
EDIT: This is actually provided by estimate_peak_width
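The derivative zero-crossing approach mentioned above can be sketched as follows. This is an illustrative helper, not an existing lumispy or HyperSpy function; real (noisy) data would be smoothed before taking the derivative.

```python
import numpy as np

def maxima_from_derivative(x, y):
    """Locate local maxima as +/- zero-crossings of the first derivative.

    A sign change of diff(y) from positive to non-positive marks a local
    maximum; this also picks up secondary maxima that a global argmax
    would miss.
    """
    dy = np.diff(y)
    # Indices where the slope flips from rising to falling.
    idx = np.where((dy[:-1] > 0) & (dy[1:] <= 0))[0] + 1
    return x[idx]
```

On a two-peak test spectrum this returns both peak positions, including the weaker secondary peak.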
I have recently tried to think of what parameters we should include in the s.metadata
and what parameters stay as .original_metadata
(and #51 made me start this issue).
I normally load .sur files using the hyperspy io plugin. That reads the original_metadata (more than 50 parameters), and then I parse some relevant values into the metadata to be used in some functions (e.g. calibration, normalisation, ...).
After some discussion with @LMSC-NTappy, we realised that the variance model stored in LumiSpectrum.metadata.Signal.Noise_properties.variance should no longer be homoscedastic.
Instead, the noise variance should also be converted to eV, but with the intensities multiplied by the square of the Jacobian (according to Var(aX) = a**2 * Var(X)).
I am not sure what the best way to implement this subtlety is in our luminescence object. I understand that, once you have your signal, you can compute the variance manually and then apply the Jacobian transformation. But is there a way to define this noise in a generic way (tied to the class rather than to the data), so that when the cl.to_eV() function is called, the transformation is also applied to the variance metadata (should there be any)?
This is the Hyperspy guideline on noise metadata.
Let's use this chat to discuss further @LMSC-NTappy. Any thoughts?
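To make the proposed behaviour concrete, here is a sketch of a wavelength-to-energy conversion that transforms the variance alongside the signal. This is a hypothetical helper, not the lumispy API; the constant is approximate and the real to_eV does more (e.g. refractive index of air).

```python
import numpy as np

def to_ev_with_variance(wavelength_nm, intensity, variance):
    """Apply the Jacobian transformation to both signal and variance.

    I(E) = I(lambda) * |d(lambda)/dE| = I(lambda) * hc / E**2, and since
    Var(a*X) = a**2 * Var(X), the variance picks up the squared Jacobian.
    """
    hc = 1239.84198  # eV*nm (approximate)
    wavelength_nm = np.asarray(wavelength_nm, dtype=float)
    energy = hc / wavelength_nm
    jacobian = hc / energy ** 2  # |d(lambda)/dE|
    return energy, intensity * jacobian, variance * jacobian ** 2
```

If the variance were stored as a model on the class rather than as data, to_eV() could apply this transform automatically whenever a variance is present.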
We have some low hanging fruit in increasing the test coverage of LumiSpy:
- px_to_nm_grating_solver in signals/luminescence_spectrum.py is not covered. [See codecov, lines 291-310]
- For to_eV and to_invcm in signals/luminescence_spectrum.py, the case where the variance is a single int/float is not covered when inplace=True. [See codecov, lines 154-161 & 309-316]
- scale_by_exposure in signals/common_luminescence.py is missing coverage for the situation where ("Signal.quantity") == "Intensity (Counts)". [See codecov, lines 154-155]
- remove_spikes in signals/cl_spectrum.py is missing coverage for show_diagnosis_histogram. [See codecov, line 92]
- solve_grating_equation in utils/axes.py is missing coverage for the axis being a non-uniform DataAxis. [See codecov, line 485]
- remove_background_from_file misses some tests for warnings/errors, but as it will be deprecated, it is not worth implementing them.
Indeed, the failures are related to changes in the map method:
ValueError: The size of the navigation_shape for the kwarg shift (<(100,)> must be consistent with the size of the mapped signal <(10, 10)>
This points to an argument that doesn't have the right shape. It is fairly likely that ambiguous usages of map such as this one are not allowed anymore.
Originally posted by @ericpre in #68 (comment)
Hi everyone,
first of all thanks for the lumispy package.
I'm working with CL data, and most of the time I'm playing with interactive ROIs or new hyperspy functions like hyperspy.api.plot.plot_roi_map(), but non-uniform axes are not supported yet.
Would it be reasonable to expand the to_eV() function to include a conversion option like in the following code?
import numpy as np
import matplotlib.pyplot as plt
import hyperspy.api as hs
from lumispy.signals import CLSEMSpectrum
n_points = 1024
wl = np.linspace(300,700,n_points)
sigma = 25
wl_center = 500
gaussian = 1/(sigma * np.sqrt(2 * np.pi)) * np.exp( - (wl - wl_center)**2 / (2 * sigma**2))
noise = CLSEMSpectrum(np.random.random((10,10,n_points))/1000)
s = CLSEMSpectrum(np.broadcast_to(gaussian, (10,10,n_points))) + noise
s.axes_manager.signal_axes[0].name = 'Wavelength'
s.axes_manager.signal_axes[0].units = 'nm'
s.axes_manager.signal_axes[0].scale = (700-300)/n_points
s.axes_manager.signal_axes[0].offset = 300
s.axes_manager.navigation_axes[0].name = 'X'
s.axes_manager.navigation_axes[0].units = 'nm'
s.axes_manager.navigation_axes[0].scale = 0.1
s.axes_manager.navigation_axes[0].offset = 100
s.axes_manager.navigation_axes[1].name = 'Y'
s.axes_manager.navigation_axes[1].units = 'nm'
s.axes_manager.navigation_axes[1].scale = 0.1
s.axes_manager.navigation_axes[1].offset = 100
s_ev = s.to_eV(inplace=False)
print(type(s_ev.axes_manager[-1]))
import hyperspy.axes as hsax
old_axis = s_ev.axes_manager[-1].axis
new_axis = hsax.DataAxis(axis=old_axis, name='Energy', units='eV')
new_axis.convert_to_uniform_axis()
print(type(new_axis))
test = s_ev.interpolate_on_axis(new_axis, axis='Energy', inplace=False)
test.plot()
hs.plot.plot_spectra([s_ev.inav[5,5], test.inav[5,5]], legend=['Original data', 'Data after uniform + interpolation'])
fig, ax = plt.subplots()
ax.scatter(old_axis, np.ones(len(old_axis)), label='Non uniform original axis')
ax.scatter(new_axis.axis, 2*np.ones(len(old_axis)), label='Uniform new axis')
fig.legend()
thanks a lot
The coveralls badge is claiming 0% coverage, though when you check out the details we are at >90%. Probably something in the coveralls settings?
Or use codecov instead of coveralls, like HyperSpy?
The grating_correction function fails when using it on non-square navigation arrays.
The function needs to be generalised to any navigation shape.
TODO:
Find out the exact error raised in the terminal.
crop_edges is a wrapper function for inav that takes the parameter crop_px and removes as many pixels from all sides of a map.
To make this function more versatile, it would make sense to alternatively allow the following sets of parameters:
- crop_x_px and crop_y_px, to crop different amounts of pixels in the x and y directions
- crop_left_px, crop_right_px, crop_top_px, crop_bottom_px, to crop different amounts of pixels from all sides (check in matplotlib what order would typically be used for such a set of parameters corresponding to the edges)

@jordiferrero, you originally implemented this function. Would this extension make sense to you? We have a student that could implement it.
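The per-side parameter set could look like the sketch below. It slices a bare numpy array for illustration; the real method would wrap the signal's inav slicer, and the parameter names simply follow the proposal above.

```python
import numpy as np

def crop_edges_px(data, crop_left_px=0, crop_right_px=0,
                  crop_top_px=0, crop_bottom_px=0):
    """Crop different numbers of pixels from each side of a 2D map.

    Illustrative only: operates on a numpy array (rows = y, cols = x)
    rather than on a HyperSpy signal's navigation axes.
    """
    h, w = data.shape[0], data.shape[1]
    return data[crop_top_px : h - crop_bottom_px,
                crop_left_px : w - crop_right_px]
```

The existing single-parameter behaviour is then the special case where all four values are equal to crop_px.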
It would be useful to add a quick function that calculates some parameters to estimate the homogeneity in emission in a given area.
What I did in the past was to take the fitted model from a SI and use mean and std dev of the gaussian intensity and position. Also, I had a threshold on the relative intensity of a peak so that one could have an estimate of how many of the pixels are non-emissive.
I'm currently writing that into a function, let me know if anyone has any thoughts on:
When slicing, summing, or integrating signals of type TransientSpec, the resulting signal will be a Signal1D. It would be desirable to obtain signals of type Transient or Luminescence, depending on which axis the function operates on.
As pointed out in #140 (comment), there is quite some duplicate code in the if/else statements of the axis conversion codes that can be consolidated.
It would be desirable to allow any cross-conversions between wavelength, wavenumber and energy (documenting in the metadata the conversions to warn if multiple conversions are done on the same axis). To this end, it may be considered to move the main part of the code into a generalized conversion function.
Following @ericpre's PR #25 on fixing the signal registration for the lumispy classes, it is not working fully yet.
The tests in test_signal_registration are passing well, but if I run:
s = hs.load(path, signal_type='CL_SEM')
The following warning raises:
WARNING:hyperspy.io: signal_type='CL_SEM' not understood. See hs.print_known_signal_types() for a list of known signal types, and the developer guide for details on how to add new signal_types.
And a Signal2D class is loaded.
Moreover, if I run hs.print_known_signal_types(), all I get are the native Hyperspy classes.
The failures on the Azure pipeline are temporary: the dask-core package is broken on anaconda defaults. The package on conda-forge is fine, but CI is set up with the anaconda defaults channel at higher priority than conda-forge. This should be temporary, until AnacondaRecipes/dask-core-feedstock#2 is merged and the packages are available on the server.
My code for energy conversion relies on the non-linear DataAxis
that is not part of the HyperSpy release yet, but might hopefully become a pre-release soon. Should I still contribute it here already for you to use with the right hyperspy development branch?
It contains some additional features, like a wavelength-dependent refractive index of air based on an analytical function (therefore it cannot be a FunctionalDataAxis).
We want to provide readers for different luminescence spectroscopy file formats. In particular:
- (.xml) - see hyperspy/rosettasciio#25
- (.tvf) - see hyperspy/rosettasciio#27
- (.wdf) - see hyperspy/rosettasciio#55
- (.spc)

Is it already possible to register file reader extensions with HyperSpy @ericpre and @francisco-dlp? As these are not electron microscopy related file formats, I would propose to have them within LumiSpy - but it would be great to register them so that they work with hs.load, similar to how we extend HyperSpy by additional signal types.
@pietsjoh would work on the implementation of the readers.
What about making a v0.1.1 release? A new release would fix the integration test suite, which is failing with lumispy 0.1.0. It would also save me receiving an email notification every day telling me that the integration test suite is failing!
In the original code, the background was stored as a class attribute (cl.background).
In newer versions of the code, the background is stored in the metadata (cl.metadata.Signal.background).
Some background-related functions need to be adapted to this new version of the code.
With the addition of .sur file reading in Hyperspy 1.6, the loading functions for the HYPMap.bin are no longer necessary.
I will sort this out soon, cleaning up the io-plugins section.
From https://github.com/hyperspy/hyperspy-extensions-list/actions/runs/3119041322/jobs/5058746331
=================================== FAILURES ===================================
______________________ TestLumiSpectrum.test_errors_raise ______________________
a = [array([ 0. , 5.44444444, 10.88888889, 16.33333333, 21.77777778,
27.22222222, 32.66666667, 38.11111111,...., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])]
@array_function_dispatch(_shape_dispatcher)
def shape(a):
"""
Return the shape of an array.
Parameters
----------
a : array_like
Input array.
Returns
-------
shape : tuple of ints
The elements of the shape tuple give the lengths of the
corresponding array dimensions.
See Also
--------
len : ``len(a)`` is equivalent to ``np.shape(a)[0]`` for N-D arrays with
``N>=1``.
ndarray.shape : Equivalent array method.
Examples
--------
>>> np.shape(np.eye(3))
(3, 3)
>>> np.shape([[1, 3]])
(1, 2)
>>> np.shape([0])
(1,)
>>> np.shape(0)
()
>>> a = np.array([(1, 2), (3, 4), (5, 6)],
... dtype=[('x', 'i4'), ('y', 'i4')])
>>> np.shape(a)
(3,)
>>> a.shape
(3,)
"""
try:
> result = a.shape
E AttributeError: 'list' object has no attribute 'shape'
/usr/share/miniconda3/lib/python3.10/site-packages/numpy/core/fromnumeric.py:2033: AttributeError
During handling of the above exception, another exception occurred:
self = <lumispy.tests.signals.test_luminescence_spectrum.TestLumiSpectrum object at 0x7f5d5c5a2500>
def test_errors_raise(self):
s = LumiSpectrum(np.ones(50))
for bkg, error in error_backgrounds:
with pytest.raises(error):
> s.remove_background_from_file(bkg)
/usr/share/miniconda3/lib/python3.10/site-packages/lumispy/tests/signals/test_luminescence_spectrum.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/share/miniconda3/lib/python3.10/site-packages/lumispy/signals/luminescence_spectrum.py:553: in remove_background_from_file
if np.shape(background_xy)[0] == 1:
<__array_function__ internals>:200: in shape
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = [array([ 0. , 5.44444444, 10.88888889, 16.33333333, 21.77777778,
27.22222222, 32.66666667, 38.11111111,...., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])]
@array_function_dispatch(_shape_dispatcher)
def shape(a):
"""
Return the shape of an array.
Parameters
----------
a : array_like
Input array.
Returns
-------
shape : tuple of ints
The elements of the shape tuple give the lengths of the
corresponding array dimensions.
See Also
--------
len : ``len(a)`` is equivalent to ``np.shape(a)[0]`` for N-D arrays with
``N>=1``.
ndarray.shape : Equivalent array method.
Examples
--------
>>> np.shape(np.eye(3))
(3, 3)
>>> np.shape([[1, 3]])
(1, 2)
>>> np.shape([0])
(1,)
>>> np.shape(0)
()
>>> a = np.array([(1, 2), (3, 4), (5, 6)],
... dtype=[('x', 'i4'), ('y', 'i4')])
>>> np.shape(a)
(3,)
>>> a.shape
(3,)
"""
try:
result = a.shape
except AttributeError:
> result = asarray(a).shape
E ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.
/usr/share/miniconda3/lib/python3.10/site-packages/numpy/core/fromnumeric.py:2035: ValueError
It would be good to set up continuous integration for this repository.
It is well documented online but as a starting point it may be worth having a look:
https://github.com/hyperspy/ci-scripts
Azure tests pipeline is failing, as discussed in #168. The error happens when it tries to import:
from numba.np.ufunc import _internal
SystemError: initialization of _internal failed without raising an exception
##[error]Bash exited with code '1'.
See here for example.
#### Additional context
It seems it could be a problem caused by numpy 1.24 ([see here](https://github.com/numba/numba/issues/8615)).
#### What I already tried
I already tried the following, and it did not work:
- Setting numpy < 1.24 during the installation (in the requirements in `setup.py`)
- Removing all dependencies except hyperspy 1.7
The Read the Docs equations are broken: the "parsed" images of the maths equations are not shown.
The change that caused it is #163 by @jlaehne.
https://docs.lumispy.org/en/latest/user_guide/signal_axis.html
I tried to figure out what changed, but I am not sure... can you have a quick look?
When an Attolight file is loaded, the signal x-axis is not calibrated properly.
There are two ways to go ahead with this issue:
*With this second approach, the Attolight system seems to have a bug, whereby if you are binning, the x-axis saved in the background file in a HYP map is not correct. However, the manually saved background file has the corrected x-axis values.
This figure illustrates that:
Here, we show the difference between the correctly saved x-axis, the bugged incorrect x-axis from the HYP map background and the current estimated x-axis (without the spectrometer equation solved yet).
To do:
- load_hypmap() function to account for these small errors.
- background_substraction() function to be able to take a different x-axis to subtract.

Hi all,
I have wanted to do this for very long...
I have now added support for Read the Docs. This means that we can have explanations of the functions we write and of methodologies in a nicely formatted way.
As far as I am aware, such guides can be added to user_guide/file.rst. Feel free to add content there.
I have a list of pages I would like to add for the analysis of PL data (coming soon).
@jlaehne Maybe you could take care of the Jacobian transform documentation, if you feel like explaining in a bit more detail how LumiSpy does this transform. That'd be a great opportunity to test it out too.
I believe having this will help external users to get started, as a nicely explained guide (like the EDX analysis page in HyperSpy docs) can really become "the place to check" on how to do data analysis of a particular data signal. I think the demo notebooks are great, but they can also get a bit messy.
Happy to hear suggestions!
PS: Read the Docs should trigger a build every time there is a merge. It will loop through all the docstrings and update the docs automatically.
Some recent update to HyperSpy or an upstream library breaks the test for automatic peak removal:
> np.testing.assert_almost_equal(s3.data[0, 2, 29], 1, decimal=5)
E AssertionError:
E Arrays are not almost equal to 5 decimals
E ACTUAL: 2.000005802611809
E DESIRED: 1
You can see the full output here: https://github.com/LumiSpy/lumispy/actions/runs/4451990112/jobs/7819241076
The smaller of the two artificial peaks is not caught on every run if the threshold is auto.
I commented out lines 70 and 85 in test_cl_spectrum.py to make the tests run through in #184.
Also, I set decimal=4 (instead of 5) in all assertions, as it was losing accuracy on some runs.
So far, I have not found why the numerics suddenly seem to be less reliable.
In principle, the function should still run, but it seems to be less robust.
Any ideas, @jordiferrero or @ericpre?
I want to implement a simple set of wrappers that use np.loadtxt and np.savetxt to load and save txt, dat, csv format files. Mostly metadata agnostic of course. At least for single spectra as well as linescans. For more than one navigation dimension it would become more complicated, but at least 2d navigation with 0d signal axis (e.g. single parameters from fit results) would also be feasible. It would be helpful to import/transfer data from systems where the native file system is so far not implemented in hyperspy (e.g. Jobim Yvon spectrometer software or systems with self-programmed data collection), but a text format export is possible. On the writer side, I often want to pass on some data to colleagues who do not use hyperspy or even python (the good old Origin crowd).
@ericpre, @dnjohnstone, @jordiferrero would it make more sense to implement that in the io-part of hyperspy for general usage? Otherwise I would implement a specific set of functions just for lumispy?
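For the single-spectrum case, such wrappers are thin layers over np.savetxt/np.loadtxt. The function names below are hypothetical sketches of the proposed API:

```python
import numpy as np

def save_spectrum_txt(filename, axis, data, delimiter="\t"):
    """Save a single spectrum as a two-column (axis, intensity) text file.

    Metadata-agnostic sketch of the proposed wrapper; the real version
    would take a Signal1D and pull the axis from its axes_manager.
    """
    np.savetxt(filename, np.column_stack([axis, data]), delimiter=delimiter)

def load_spectrum_txt(filename, delimiter="\t"):
    """Load a two-column text file back into (axis, data) arrays."""
    arr = np.loadtxt(filename, delimiter=delimiter)
    return arr[:, 0], arr[:, 1]
```

Linescans would extend this to one spectrum per extra column; higher navigation dimensions would need a convention (or multiple files), as noted above.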
In the context of taking hyperspectral maps, sometimes you may have a sample where in different regions one would expect peaks at different wavelengths to become dominant.
It's useful to be able to quickly scan over a given wavelength (or energy!) range, and see how the intensity of signals within that range vary spatially.
If implemented correctly, this could be used for a number of different hyperspectral signals.
The Dorset software for Chronos CL systems has this feature, but as far as I know this doesn't exist in Lumispy/ Hyperspy.
Perhaps this specific sort of analysis is outside of the scope of Lumispy. However, I thought something like this might fit well into the utils section of the library.
I have put together a provisional implementation here:
I've called the feature 'AutoPeakMap'... not sure if this is the best name!
An example would be:
from lumispy.utils.analysis import AutoPeakMap, RangeChannel, plot_auto_peakmap
cl = hs.load(path, signal_type='CL_SEM')
apm = AutoPeakMap(cl,
                  RangeChannel('red', range_=[730, 760]),
                  RangeChannel('green', range_=[300, 400]))
apm.plot()
Renders two plots, the 'navigator', in the hyperspy sense:
and the 'signal':
The ranges in the 'navigator' can be moved around interactively and the signal plot will update accordingly.
For more quick and dirty analysis I implemented a convenience method:
plot_auto_peakmap(cl)
which chooses sensible (enough) defaults for the RangeChannels.
If this is something you think could slot into LumiSpy, I'll go ahead and write the tests and expand the docstrings etc.
Feedback on the implementation/ naming more than welcome!
Thanks,
Hugh
The nice lumispy logo doesn't play well with pypi: https://pypi.org/project/lumispy.
To figure out how to fix it, twine check
and https://packaging.python.org/guides/using-testpypi may be useful.
@dnjohnstone Do you know why in Hyperspy the signal classes are in _signals, while we have them in signals? Would it make sense to copy that structure here?
I have recently been wondering on what the best strategy would be to do the following:
All of my CL-SEM data always comes with SE images with it (I suppose for CL-TEM the equivalent would be a HAADF image). These SE images are acquired at the same time as the luminescence. However, they are not integrated with hyperspy at all.
While it is true that you can load the SE image separately (as s) and then use this s object as a navigation_axis for the cl object, this all needs to be done manually.
What's worse is that when I do some processing on the cl object (e.g. crop or subindex some of the navigation axes), the s SE image does not stay linked with the cl object in any way.
Ideally, I would like to link these two datasets together.
I have an idea of how to do that, but I wanted to quickly discuss the approach before I get into the coding.
One way would be to keep the SE image array in the metadata and, every time you apply a certain operation that modifies the navigation axis, then those changes are also applied to the SE metadata (e.g. crop pixels). Moreover, I would like to have an option when plotting where we can specify to use the SE image as navigation axis instead of the default "panchromatic".
Can you think of any better ways to implement such feature?
See https://github.com/github/renaming for details.
The master branch simply needs to be renamed, but I am opening an issue to avoid surprises!
As you set it up @jordiferrero, we have a webhook for read-the-docs running that seems to be regularly failing:
https://github.com/LumiSpy/lumispy/settings/hooks/341637685?tab=deliveries
The rtd integration seems to work otherwise. Any ideas how to fix that?
We should:
From hyperspy/hyperspy-extensions-list#28
=================================== FAILURES ===================================
______________________ TestLumiSpectrum.test_errors_raise ______________________
a = [array([ 0. , 5.44444444, 10.88888889, 16.33333333, 21.77777778,
27.22222222, 32.66666667, 38.11111111,...., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])]
@array_function_dispatch(_shape_dispatcher)
def shape(a):
"""
Return the shape of an array.
Parameters
----------
a : array_like
Input array.
Returns
-------
shape : tuple of ints
The elements of the shape tuple give the lengths of the
corresponding array dimensions.
See Also
--------
len : ``len(a)`` is equivalent to ``np.shape(a)[0]`` for N-D arrays with
``N>=1``.
ndarray.shape : Equivalent array method.
Examples
--------
>>> np.shape(np.eye(3))
(3, 3)
>>> np.shape([[1, 3]])
(1, 2)
>>> np.shape([0])
(1,)
>>> np.shape(0)
()
>>> a = np.array([(1, 2), (3, 4), (5, 6)],
... dtype=[('x', 'i4'), ('y', 'i4')])
>>> np.shape(a)
(3,)
>>> a.shape
(3,)
"""
try:
> result = a.shape
E AttributeError: 'list' object has no attribute 'shape'
/usr/share/miniconda3/lib/python3.10/site-packages/numpy/core/fromnumeric.py:2033: AttributeError
During handling of the above exception, another exception occurred:
self = <lumispy.tests.signals.test_luminescence_spectrum.TestLumiSpectrum object at 0x7fc9d6f7e7a0>
def test_errors_raise(self):
s = LumiSpectrum(np.ones(50))
for bkg, error in error_backgrounds:
with pytest.raises(error):
> s.remove_background_from_file(bkg)
/usr/share/miniconda3/lib/python3.10/site-packages/lumispy/tests/signals/test_luminescence_spectrum.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/share/miniconda3/lib/python3.10/site-packages/lumispy/signals/luminescence_spectrum.py:553: in remove_background_from_file
if np.shape(background_xy)[0] == 1:
<__array_function__ internals>:200: in shape
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = [array([ 0. , 5.44444444, 10.88888889, 16.33333333, 21.77777778,
27.22222222, 32.66666667, 38.11111111,...., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])]
@array_function_dispatch(_shape_dispatcher)
def shape(a):
"""
Return the shape of an array.
Parameters
----------
a : array_like
Input array.
Returns
-------
shape : tuple of ints
The elements of the shape tuple give the lengths of the
corresponding array dimensions.
See Also
--------
len : ``len(a)`` is equivalent to ``np.shape(a)[0]`` for N-D arrays with
``N>=1``.
ndarray.shape : Equivalent array method.
Examples
--------
>>> np.shape(np.eye(3))
(3, 3)
>>> np.shape([[1, 3]])
(1, 2)
>>> np.shape([0])
(1,)
>>> np.shape(0)
()
>>> a = np.array([(1, 2), (3, 4), (5, 6)],
... dtype=[('x', 'i4'), ('y', 'i4')])
>>> np.shape(a)
(3,)
>>> a.shape
(3,)
"""
try:
result = a.shape
except AttributeError:
> result = asarray(a).shape
E ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part.
/usr/share/miniconda3/lib/python3.10/site-packages/numpy/core/fromnumeric.py:2035: ValueError