
cylammarco / aspired


Automated SpectroPhotometric Image REDuction (ASPIRED)

Home Page: https://aspired.readthedocs.io/en/latest/

License: BSD 3-Clause "New" or "Revised" License

Python 99.90% C 0.05% Common Lisp 0.05%
spectroscopy astronomy astrophysics data-reduction calibration spectrophotometry

aspired's Introduction

Hey 👋, I'm Marco (@cylammarco)

I'm a senior researcher/software developer at The University of Edinburgh, working with the Euclid consortium. My research covers white dwarfs⭐, faint blue pulsators🌟, the star formation history of the Galaxy🌌, and spectral data reduction software development 🏳️‍🌈👾.

For work, I have been using a private GitLab host since late 2023. I am still actively coding :)

aspired's People

Contributors

cylammarco, dependabot-preview[bot], dependabot[bot]


aspired's Issues

2D sky model

Future enhancement suggested at the MAAT science kick-off meeting.

Support user supplied line list

Most observatories provide arc line lists for their lamps; such information can massively improve the reliability of automated wavelength calibration.
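
A minimal sketch of how such a list could be used, assuming the line list is a plain-text file of laboratory wavelengths in Å and that an approximate wavelength solution already exists (the file name, tolerance and variable names here are hypothetical, and this is not the ASPIRED/rascal API):

```python
import numpy as np

# Hypothetical inputs: laboratory wavelengths from the observatory's line
# list, and the wavelengths of detected arc peaks from a first-pass solution.
lab_lines = np.loadtxt("observatory_arc_lines.txt")      # Angstrom
peak_waves = np.array([4046.6, 4358.4, 5460.9, 6965.5])  # Angstrom (example)

tolerance = 5.0  # Angstrom; matching window, instrument dependent

# Keep only peaks that lie within the tolerance of a catalogued lamp line.
matched = []
for w in peak_waves:
    idx = np.argmin(np.abs(lab_lines - w))
    if abs(lab_lines[idx] - w) < tolerance:
        matched.append((w, lab_lines[idx]))

# The matched (detected, catalogue) pairs would then be fed back into the
# wavelength fit, rejecting detections that do not correspond to lamp lines.
print(matched)
```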

sensitivity curve should be a polynomial fit

compute_sensitivity currently uses an interpolation function instead of a polynomial fit. This leads to a noisy sensitivity curve, which is significantly worse near the detector edges.
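
A minimal sketch of the suggested change, fitting a low-order polynomial to the logarithm of the sensitivity rather than interpolating it directly (the arrays and polynomial order are hypothetical stand-ins):

```python
import numpy as np

# Hypothetical arrays: wavelength grid and the raw (noisy) sensitivity,
# i.e. the ratio of the literature standard flux to the observed counts/s.
wave = np.linspace(3500.0, 8000.0, 500)
sensitivity = np.exp(-((wave - 5500.0) / 2000.0) ** 2) * (1.0 + 0.05 * np.random.randn(wave.size))

# Fit a low-order polynomial in log space so the curve stays positive and
# is not dragged around by noise near the detector edges.
coeffs = np.polynomial.polynomial.polyfit(wave, np.log10(sensitivity), deg=7)
sensitivity_smooth = 10.0 ** np.polynomial.polynomial.polyval(wave, coeffs)
```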

Need a plotting library that is independent of plotly-orca

plotly-orca is very unfriendly in terms of installation effort. An independent static plotting library should be used; for example, default to matplotlib and make plotly and plotly-orca optional dependencies. We also recently came across a server that could not do background plotting because it lacks a display, which was solved by using Xvfb; again, it works, but it is a very user-unfriendly setup.
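
For the headless-server case, matplotlib can render straight to file without any display by selecting the non-interactive Agg backend before pyplot is imported, which avoids the Xvfb workaround entirely (data below is a hypothetical stand-in for a reduced spectrum):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; no X display required
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical data standing in for a reduced spectrum.
wave = np.linspace(4000.0, 8000.0, 1000)
flux = np.exp(-((wave - 6000.0) / 500.0) ** 2)

fig, ax = plt.subplots()
ax.plot(wave, flux)
ax.set_xlabel("Wavelength / A")
ax.set_ylabel("Flux (arbitrary)")
fig.savefig("spectrum.png")  # written straight to disk, no window opened
```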

Transcribing extensive FITS headers from the raw data

Just starting a discussion.

Currently ASPIRED generates FITS with a bare minimal header. The input files of course include extensive data about the observation circumstances, telescope and the instrument. In order to integrate the data into a complete workflow there needs to be some way of propagating all those data through into the headers of the resulting data. As a trivial example, I have just reduced a couple of hundred SPRAT frames through ASPIRED, but now have no easy way of telling which were obtained in the BLUE config and which in the RED.

So my question is whether there is any plan for this sort of feature in ASPIRED, and also whether it makes any sense anyway.

It would be very straightforward to have ASPIRED transcribe everything from the input FITS file, but you could easily end up with invalid FITS headers. If ASPIRED just copies the full header from some arbitrary instrument, it has no way of knowing whether or not those values make sense after extraction.

You could have an option to transcribe certain nominated values to the output FITS, but it might end up an enormous list that is difficult to keep up to date.

Is it better to just say that the user needs to write all that into their own wrapper and handle it themselves, with ASPIRED only ever creating the barest minimum header of parameters that it controls itself?

If the latter, then the pipeline workflow is something like

  • Run ASPIRED
  • Save ASPIRED result into a temporary file on disk
  • Create your own output file with your own bespoke headers
  • Transcribe data from the temporary aspired file into your final file.
    This seems a bit clumsy, but it is flexible and very general, allowing you to format your own outputs however you like.
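
A minimal sketch of that wrapper step, copying a nominated subset of keywords from the raw frame into the reduced product with astropy (file names and the keyword list are hypothetical; GRATROT is only a stand-in for whatever keyword records the SPRAT BLUE/RED configuration):

```python
from astropy.io import fits

# Hypothetical file names and keyword list.
keywords_to_copy = ["OBJECT", "DATE-OBS", "EXPTIME", "TELESCOP", "INSTRUME", "GRATROT"]

with fits.open("raw_frame.fits") as raw, fits.open("aspired_output.fits", mode="update") as reduced:
    for key in keywords_to_copy:
        if key in raw[0].header:
            # Copy value and comment; skip anything absent rather than failing.
            reduced[0].header[key] = (raw[0].header[key], raw[0].header.comments[key])
```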

ImageReduction specification input list requires arc

It appears to be mandatory that the text list file passed to ImageReduction() includes an arc, but this is not tested and trapped if missing. I wanted to use the tracing and extraction routines without doing a wavelength calibration, so I did not include an arc and got the following error.


IndexError Traceback (most recent call last)
<ipython-input> in <module>
----> 1 science_frame = aspired.ImageReduction('RJSTest/v_e_20160808_20_1_0_1.list');

/data/LT/Commissioning/SPRAT/Aspired/ASPIRED/aspired/aspired.py in init(self, filelistpath, ftype, saxis, saxis_keyword, combinetype_light, sigma_clipping_light, clip_low_light, clip_high_light, exptime_light, exptime_light_keyword, combinetype_dark, sigma_clipping_dark, clip_low_dark, clip_high_dark, exptime_dark, exptime_dark_keyword, combinetype_bias, sigma_clipping_bias, clip_low_bias, clip_high_bias, combinetype_flat, sigma_clipping_flat, clip_low_flat, clip_high_flat, silence)
239 dtype='str',
240 autostrip=True)
--> 241 self.imtype = self.filelist[:, 0]
242 self.impath = self.filelist[:, 1]
243 try:

IndexError: too many indices for array

After adding an arc line to the text file, with a dummy file name assigned to "arc" that I will never use, it works OK.
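
One plausible cause, assuming the list file is parsed with numpy.genfromtxt as in the traceback: a list with only a single row comes back as a 1-D array, so filelist[:, 0] raises exactly this IndexError. A defensive sketch (file name, delimiter and variable names are hypothetical, not the ASPIRED implementation):

```python
import numpy as np

# Hypothetical: parse the list file the same way as in the traceback.
filelist = np.genfromtxt("frames.list", delimiter=",", dtype="str", autostrip=True)

# A single-row list is returned as a 1-D array, so force 2-D before slicing
# and check explicitly for the frame types that are genuinely required.
filelist = np.atleast_2d(filelist)
imtype = filelist[:, 0]
impath = filelist[:, 1]

if "arc" not in imtype:
    # Warn instead of failing later with an opaque IndexError; wavelength
    # calibration can simply be skipped downstream.
    print("No arc frame supplied: tracing/extraction only, no wavelength calibration.")
```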

Descriptive labels in FITS extensions

The final output FITS from OneDSpec.save_fits() can have up to about 15 extensions. I am struggling to tell them all apart and work out what is in each. Could descriptive labels be added to each FITS header?

If I use
OneDSpec.save_fits(output='flux_resampled')
then I get four HDUs. Three are labelled "flux" and one is labelled "sensitivity". When I add more options into the output='' string, I get no labels at all. The headers at the moment are extremely sparse, and I assume that is a work in progress.
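
For reference, astropy lets each extension carry an EXTNAME so the HDUs can be told apart in hdulist.info() or fits.info(); a sketch of how the output could be labelled (the arrays and extension names here are illustrative, not the actual ASPIRED layout):

```python
from astropy.io import fits
import numpy as np

# Hypothetical arrays standing in for the reduced products.
flux = np.zeros(1024)
flux_err = np.zeros(1024)
sensitivity = np.zeros(1024)

hdulist = fits.HDUList([
    fits.PrimaryHDU(),
    fits.ImageHDU(flux, name="FLUX_RESAMPLED"),
    fits.ImageHDU(flux_err, name="FLUX_RESAMPLED_ERR"),
    fits.ImageHDU(sensitivity, name="SENSITIVITY"),
])
# EXTNAME makes the ~15 extensions self-describing when the file is listed.
hdulist.writeto("reduced.fits", overwrite=True)
```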

The S/N plot looks wrong

The S/N shown in the ap_extract plot seems too small; perhaps there is an extra or missing square root somewhere?
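
For comparison, the textbook CCD signal-to-noise for a simple aperture sum is the source counts over the quadrature sum of source, sky and read noise; a sketch with hypothetical numbers (not necessarily what ap_extract computes):

```python
import numpy as np

# Hypothetical values in electrons.
source_counts = 5000.0   # total source electrons in the aperture
sky_per_pix = 50.0       # sky electrons per pixel
read_noise = 5.0         # electrons RMS per pixel
npix = 10                # pixels summed across the aperture

noise = np.sqrt(source_counts + npix * (sky_per_pix + read_noise ** 2))
snr = source_counts / noise
# Dividing by the variance instead of its square root (i.e. a missing sqrt
# on the noise term) would deflate the S/N for bright sources, as described.
print(snr)
```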

Install without the test data

I appreciate this is probably an administrative headache to reorganise the repository, but is there some easy mechanism by which I can install updates without having to pull the many MB of test data? I am using
pip install git+https://github.com/cylammarco/ASPIRED@dev
and it takes about 15 min. I know it would also be possible to clone the repository once and pull updates, but you previously suggested that the pip install was the safer option until the other setup.py issues get cleaned up. Basically, does this require reorganising the repository, or is there some pip magic I can use from my end to save myself downloading all those FITS files? 15 min is not the end of the world, so this is not a huge issue. Feel free to say I just have to live with it for the time being!

WCS in adu_resampled FITS extensions

When I save with
OneDSpec.save_fits(output='flux_resampled+wavecal+flux+adu+adu_resampled')
I was interpreting flux_resampled and adu_resampled as being arrays of the same size with the same wavelength coordinates. Is that correct? Certainly they have the same dimensions. By contrast, 'adu' has the same dimensions as the original CCD without resampling.

So my question is, is the WCS in 'adu_resampled' the same as in 'flux_resampled'? Can I simply read the WCS from 'flux_resampled' and use it for 'adu_resampled'? Will the WCS eventually be written into 'adu_resampled' too?
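
In the meantime, the linear wavelength WCS can be copied across by hand with astropy, assuming the two extensions really do share the same resampled grid (the file and extension names, and the keyword set, are assumptions based on the usual linear 1-D WCS):

```python
from astropy.io import fits

with fits.open("reduced.fits", mode="update") as hdul:
    src = hdul["flux_resampled"].header   # extension names are assumptions
    dst = hdul["adu_resampled"].header

    # Standard linear 1-D WCS keywords; copy only the ones that are present.
    for key in ("CRVAL1", "CRPIX1", "CDELT1", "CTYPE1", "CUNIT1"):
        if key in src:
            dst[key] = src[key]
```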

Version specific dev branch

We are still at the early development stage; however, we should start separating the current-version dev/maintenance branch (for patch releases) from a dev branch for minor releases.

pip struggles to install ASPIRED

Reported from Ilknur:

pip install git+https://github.com/cyalmmarco/ASPIRED.git@dev installs the dist-info but not the package

however
git clone https://github.com/cyalmmarco/ASPIRED.git@dev then pip install -e ASPIRED works

Is it a problem with pip or with setup.py? Probably the latter?

ap_extract fails near edge

ap_extract may not work near the edges of an image; it definitely does not work near edges in force-extraction mode.
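
A possible guard for the edge case is to clamp the aperture and sky windows to the detector before summing; a sketch only, with hypothetical names, not the ap_extract implementation:

```python
import numpy as np

def clipped_window(centre, half_width, n_rows):
    """Return aperture bounds clamped to the detector."""
    lo = int(np.clip(np.floor(centre - half_width), 0, n_rows - 1))
    hi = int(np.clip(np.ceil(centre + half_width), 0, n_rows - 1))
    return lo, hi + 1  # slice-style bounds, never outside the image

# Example: a trace that has drifted to within 2 pixels of the bottom edge.
lo, hi = clipped_window(centre=1.5, half_width=5.0, n_rows=256)
print(lo, hi)  # 0 8 rather than a negative index
```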

Cannot save spectrum in instrumental coordinates (counts vs pix)

This is a pretty trivial one. It's not something I really absolutely need at the moment so it's not urgent.

I tried to use
oneDSpec.save_fits(output='count')
to save off a copy of the basic extraction in instrumental units, i.e., just photo-electrons (or counts) against CCD pixel. I got the error message:

Traceback (most recent call last):
File "/data/lt/Commissioning/SPRAT/Aspired/test_aspired_July2020.py", line 130, in
overwrite=True)
File "/Users/rjs/miniconda3/envs/astroconda/lib/python3.6/site-packages/aspired/spectral_reduction.py", line 7630, in save_fits
raise Error('Neither wavelength nor flux is calibrated.')
NameError: name 'Error' is not defined

After I have done the wavelength and flux calibrations, then save_fits() works and will indeed save off a file in exactly the format I wanted, but it seems a bit odd to insist that you must wavelength calibrate the data before you can save off the uncalibrated version. As I say, I can certainly live without it for now, but thought I'd point it out.
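
Two separate things seem to be going on here: the guard raises a class called Error that is never defined (hence the NameError masking the intended message), and the guard itself is arguably too strict. A sketch of a gentler check, under the assumption that the spectrum object exposes flags for the calibration stages (function and attribute names are hypothetical):

```python
def save_fits(spectrum, output="count", filename="reduced", overwrite=False):
    # Raise a real, built-in exception so the user sees the intended message
    # rather than "NameError: name 'Error' is not defined".
    wants_calibrated = any(token in output for token in ("wavecal", "flux", "flux_resampled"))
    if wants_calibrated and not (spectrum.wavelength_calibrated or spectrum.flux_calibrated):
        raise RuntimeError("Neither wavelength nor flux is calibrated.")

    # 'count' (counts vs pixel) needs neither calibration, so it can always
    # be written out as soon as the extraction exists.
    ...
```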

fit_coeff param error in WavelengthCalibration.fit()

Calling WavelengthCalibration.fit() with the optional display plot switch on is giving me an error for parameter fit_coeff.

I think the issue is that fit_coeff does not exist as a parameter in rascal.calibrator.plot_fit(). Possibly I just have the wrong rascal? I (think I) am running the rascal dev branch.

target1D.fit(max_tries=50, stype='science+standard', display=True)

Traceback (most recent call last):
File "/data/lt/Commissioning/SPRAT/Aspired/test_aspired_July2020.py", line 153, in
target1D.fit(max_tries=50, stype='science+standard', display=args.displayplots)
File "/Users/rjs/miniconda3/envs/astroconda/lib/python3.6/site-packages/aspired/spectral_reduction.py", line 7115, in fit
filename=filename)
File "/Users/rjs/miniconda3/envs/astroconda/lib/python3.6/site-packages/aspired/spectral_reduction.py", line 4091, in fit
filename=filename)
TypeError: plot_fit() got an unexpected keyword argument 'fit_coeff'

compute_sencurve(), spectres() limit bound error

Just in order to get an idea of how a basic reduction would flow, I have been running the included Jupyter notebooks. They seem to work fine for me up to compute_sencurve(), where they generate an exception in spectres.

Python v3.7.4
ASPIRED version: git cloned today 2019-09-13
spectres v2.0.0


ValueError Traceback (most recent call last)
<ipython-input> in <module>
2 lhs6328_reduced = aspired.OneDSpec(lhs6328, wavecal, standard=hilt102, wave_cal_std=wavecal, flux_cal=fluxcal)
3 lhs6328_reduced.apply_wavelength_calibration('all')
----> 4 lhs6328_reduced.compute_sencurve(kind='cubic')
5 lhs6328_reduced.inspect_sencurve()

/data/LT/Commissioning/SPRAT/Aspired/ASPIRED/aspired/aspired.py in compute_sencurve(self, kind, smooth, slength, sorder, display)
1414 # resampling both the observed and the database standard spectra
1415 # in unit of flux per second
-> 1416 flux_std = spectres(self.wave_std_true, self.wave_std, self.adu_std / self.exptime_std)
1417 flux_std_true = self.fluxmag_std_true
1418 else:

~/miniconda2/envs/astroconda/lib/python3.7/site-packages/spectres/spectral_resampling.py in spectres(new_spec_wavs, old_spec_wavs, spec_fluxes, spec_errs)
70 # Check that the range of wavelengths to be resampled_fluxes onto falls within the initial sampling region
71 if filter_lhs[0] < spec_lhs[0] or filter_lhs[-1] > spec_lhs[-1]:
---> 72 raise ValueError("spectres: The new wavelengths specified must fall within the range of the old wavelength values.")
73
74 #Generate output arrays to be populated

ValueError: spectres: The new wavelengths specified must fall within the range of the old wavelength values.
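
One workaround on the calling side is to trim the resampling grid to the overlap of the two wavelength ranges before handing it to spectres, so this hard ValueError cannot trigger (array names are hypothetical; the call follows the positional signature shown in the traceback):

```python
import numpy as np
from spectres import spectres

# Hypothetical inputs: the literature standard grid and the observed standard.
wave_std_true = np.linspace(3200.0, 9200.0, 600)    # target (new) grid
wave_std = np.linspace(3800.0, 8400.0, 1024)         # observed (old) grid
adu_std = np.ones_like(wave_std)
exptime_std = 120.0

# spectres compares bin edges, not bin centres, so trim with a one-pixel
# margin to stay safely inside the observed wavelength range.
margin = np.median(np.diff(wave_std))
mask = (wave_std_true >= wave_std[0] + margin) & (wave_std_true <= wave_std[-1] - margin)

flux_std = spectres(wave_std_true[mask], wave_std, adu_std / exptime_std)
```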

Can save_fits() write out spectra that are not yet flux calibrated?

save_fits() calls _create_fits() which assumes the flux calibration has already been applied. I believe users will frequently want to save off the spectrum at various stages along the reduction. In particular it will be extremely common for observers to extract and wavelength calibrate an observation but not have a standard available. Even if they do intend to flux calibrate, they may simply have observed the target before the standard.
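
One way to support this, offered as a design sketch rather than the ASPIRED implementation, is to build the HDU list from whatever products actually exist instead of assuming the full chain has run (attribute and extension names are hypothetical):

```python
from astropy.io import fits

def create_fits(spec):
    """Assemble an HDUList from whichever reduction products exist."""
    hdulist = fits.HDUList([fits.PrimaryHDU()])
    # Each stage is optional: extraction only, +wavelength, +flux.
    if getattr(spec, "count", None) is not None:
        hdulist.append(fits.ImageHDU(spec.count, name="COUNT"))
    if getattr(spec, "wave", None) is not None:
        hdulist.append(fits.ImageHDU(spec.wave, name="WAVELENGTH"))
    if getattr(spec, "flux", None) is not None:
        hdulist.append(fits.ImageHDU(spec.flux, name="FLUX"))
    if len(hdulist) == 1:
        raise RuntimeError("Nothing has been extracted yet; nothing to save.")
    return hdulist
```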

More efficient handling of _spectrum1D

The current design produces duplicated copies of _spectrum1D objects in onedspec.wavecal and onedspec.fluxcal. We should design a structure that allows shared _spectrum1D objects, so that both wavecal and fluxcal access and modify the same memory and duplication errors are avoided.
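
The sharing itself is cheap in Python, since holding the same object in two places is just two references to one piece of memory; a toy sketch of the proposed structure (class and attribute names are illustrative, not the ASPIRED API):

```python
class Spectrum1D:
    """Single container mutated by every calibration step."""
    def __init__(self):
        self.wave = None
        self.flux = None

class WavelengthCalibration:
    def __init__(self, spectrum):
        self.spectrum = spectrum  # reference, not a copy

class FluxCalibration:
    def __init__(self, spectrum):
        self.spectrum = spectrum  # same object as above

spec = Spectrum1D()
wavecal = WavelengthCalibration(spec)
fluxcal = FluxCalibration(spec)
assert wavecal.spectrum is fluxcal.spectrum  # one copy, no synchronisation bugs
```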

Discussion of defaults on inspect_reduced_spectrum()

What is the proper venue for discussions? The fact that this page is called "issues" sort of implies something is wrong, and I would not go that far. I just want to raise a question about selecting parameter defaults.

Currently we have

inspect_reduced_spectrum(self, wave_min=4000., wave_max=8000., ...

That is awfully SPRAT-specific. Whilst that is obviously best for me and makes my life easier playing with SPRAT data, it seems to me that better defaults would be either (a) the entire detector array or (b) some clever automated guess based on, for example, where the ap_trace fails and drops to zero. I would suggest that (a) is enough for release v1 and (b) can go on the ideas list.
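
Option (a) is essentially a one-liner once the wavelength solution exists: derive the plot limits from the calibrated wavelength array itself rather than hard-coding SPRAT's range. A sketch with hypothetical names, with an optional trim where the extracted counts drop to zero as a step towards option (b):

```python
import numpy as np

def default_wave_limits(wave, count=None):
    """Plot limits from the data: full array, optionally trimmed to where
    the extraction is non-zero (a crude version of option (b))."""
    if count is not None:
        good = np.nonzero(count)[0]
        if good.size > 0:
            return wave[good[0]], wave[good[-1]]
    return float(np.min(wave)), float(np.max(wave))

# Example with a hypothetical calibrated grid.
wave = np.linspace(3600.0, 9100.0, 1024)
wave_min, wave_max = default_wave_limits(wave)
```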
