sail-labs / amical

Extraction pipeline and analysis tools for the Aperture Masking Interferometry mode of the latest generation of instruments (ground-based and space-based).

Home Page: https://sail-labs.github.io/AMICAL/

License: MIT License

Language: Python (100.00%)
Topics: instrumentation, pipelines, interferometry, data-processing, jwst, eso-vlt

amical's People

Contributors

dependabot[bot], drsoulain, neutrinoceros, pre-commit-ci[bot], tomasstolker, vandalt


amical's Issues

ValueError for scatter plot color when plotting mask coordinates

I recently did a fresh install and matplotlib throws an error (shown at the end of the issue) when I run the doc/example_NIRISS.py script. It seems to be caused by c='' on line 41 of the _plot_mask_coord function. I tried changing the value to c='None' and I get the expected result (empty blue hexagons). I made the change locally, so I could submit a small PR with this modification.

The full error message:

Traceback (most recent call last):
  File "/home/vandal/miniconda3/envs/amical/lib/python3.9/site-packages/matplotlib/axes/_axes.py", line 4330, in _parse_scatter_color_args
    colors = mcolors.to_rgba_array(c)
  File "/home/vandal/miniconda3/envs/amical/lib/python3.9/site-packages/matplotlib/colors.py", line 367, in to_rgba_array
    raise ValueError("Using a string of single character colors as "
ValueError: Using a string of single character colors as a color sequence is not supported. The colors can be passed as an explicit list instead.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/vandal/astro/AMICAL/doc/example_NIRISS.py", line 49, in <module>
    bs_t = amical.extract_bs(cube_t, file_t, targetname='fakebinary',
  File "/home/vandal/astro/AMICAL/amical/mf_pipeline/bispect.py", line 1023, in extract_bs
    mf = make_mf(maskname, infos.instrument, infos.filtname, npix, peakmethod=peakmethod,
  File "/home/vandal/astro/AMICAL/amical/mf_pipeline/ami_function.py", line 359, in make_mf
    _plot_mask_coord(xy_coords, maskname, instrument)
  File "/home/vandal/astro/AMICAL/amical/mf_pipeline/ami_function.py", line 40, in _plot_mask_coord
    plt.scatter(xy_coords[i][0], xy_coords[i][1],
  File "/home/vandal/miniconda3/envs/amical/lib/python3.9/site-packages/matplotlib/pyplot.py", line 3037, in scatter
    __ret = gca().scatter(
  File "/home/vandal/miniconda3/envs/amical/lib/python3.9/site-packages/matplotlib/__init__.py", line 1352, in inner
    return func(ax, *map(sanitize_sequence, args), **kwargs)
  File "/home/vandal/miniconda3/envs/amical/lib/python3.9/site-packages/matplotlib/axes/_axes.py", line 4496, in scatter
    self._parse_scatter_color_args(
  File "/home/vandal/miniconda3/envs/amical/lib/python3.9/site-packages/matplotlib/axes/_axes.py", line 4339, in _parse_scatter_color_args
    raise ValueError(
ValueError: 'c' argument must be a color, a sequence of colors, or a sequence of numbers, not

It seems to be caused by a recent change in matplotlib because I never had the issue in the past.
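
For reference, a minimal sketch of the suggested change (the coordinates below are placeholder values; only the c argument matters here):

import matplotlib.pyplot as plt

x, y = [0.0, 1.0, 2.0], [0.0, 1.0, 0.5]

# c='' raises a ValueError on recent matplotlib versions;
# c='None' draws unfilled markers as intended.
plt.scatter(x, y, c='None', edgecolors='C0', marker='h', s=200)
plt.show()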

TST: failing CI

Opening this issue to acknowledge that AMICAL's CI is currently failing due to a change in setuptools, whose latest version emits a deprecation warning. There's probably nothing to be done on our end. The issue has been reported upstream as pypa/setuptools#2885

todo: reactivate isort

For reasons unclear to me at the moment, we couldn't make isort play nicely with black. This is definitely solvable; I'll look into it later.

option to suppress tqdm in extract_bs

At line 119 of bispect.py, there is a tqdm loop for 'Extracting in the cube'.

If you want to loop over many files, you should be able to disable this progress bar.

I suggest using the disable keyword argument of tqdm: tqdm(..., disable=True).

The same applies to the 'CP covariance' tqdm in _compute_cp_cov.
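
For illustration, a minimal sketch of how the flag could be threaded down to the tqdm call (the show_progress name and wrapper function are hypothetical, not AMICAL's actual API):

from tqdm import tqdm

def process_cube(cube, show_progress=True):
    # disable=True silences the bar entirely, which is what we want in batch mode
    for frame in tqdm(cube, desc="Extracting in the cube", disable=not show_progress):
        ...  # per-frame extraction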

Adding ERIS SAM

Dear developers of AMICAL,

I'm participating in the commissioning of the new ERIS instrument at VLT and, along with the two coronagraphs, I'm responsible for the 3 SAM masks in NIX (SAM-7, 9, and 23). How can I add support for those in AMICAL? Is there a simple workflow to add support for a new instrument?

Thanks for your help!

Jean

TST: CI instability (setuptools + munch)

CI is currently unstable (see https://github.com/SAIL-Labs/AMICAL/actions/runs/4180212707).
What happens is that the newest version of setuptools (namely 67.3), which provides pkg_resources, deprecates pkg_resources.declare_namespace (see pypa/setuptools#3434), which is used by munch.

This has been a problem with munch for a while already (see Infinidat/munch#67), and it's actually been solved on the dev branch for 3 years, but the patch was never released, and now the repo looks essentially unmaintained.

I would advocate trying to get rid of munch as a dependency, though I may not be fully aware of the challenges this poses. TBD with @DrSoulain.

Enable combining integrations from multiple FITS files

For cases where the observations are split across multiple FITS files with the exact same setup, it would be nice to have a straightforward way to combine the observations. A good data set to test this would be the JWST ERS AMI observations. Here are the potential options I have in mind:

  • Don't change anything in AMICAL and let users create their own combined FITS cubes before using AMICAL.
  • Make cleaning and extraction functions accept a cube and a header instead of a filename.
  • Make cleaning and extraction functions accept a list of filenames instead of a single one.
  • Don't change anything in the extraction and combine the interferometric observables before searching for companions.

The advantage of the last one is that it enables correcting for potential PA differences between the cubes. For example, the ERS data has a 2e-6 deg (~ 7 mas) ROLL_REF difference between consecutive cubes. The commissioning PA results gave ~ 0.05 deg precision on the fitted PA for AB Dor, so it would be negligible in this case, but maybe it could be a problem with future datasets?

@DrSoulain @benjaminpope @neutrinoceros what is your opinion on this?
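
For reference, a minimal sketch of the first option (combining cubes outside AMICAL), assuming all files share the same setup and image dimensions; the file names are placeholders:

import numpy as np
from astropy.io import fits

files = ["obs_001.fits", "obs_002.fits", "obs_003.fits"]
cubes = [fits.getdata(f) for f in files]      # each of shape (n_int, ny, nx)
combined = np.concatenate(cubes, axis=0)      # stack the integrations

header = fits.getheader(files[0])             # reuse the first header
fits.writeto("obs_combined.fits", combined, header, overwrite=True)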

Calibrating pupil-stabilized data

Hello AMICAL team,

Thanks a lot for providing this great pipeline!

I have a question about calibrating data that are obtained with pupil instead of field tracking.

Sorry if I may have misunderstood how this works because I am new with NRM data analysis. As far as I could tell, AMICAL only supports the use of a constant value for the position angle (i.e. the pa parameter), correct? With pupil-stabilized observations, the value of pa changes during the observation, which then nicely increases the uv coverage. In tools.compute_pa, the mean position angle is returned, while I would have expected that the actual position angle of each frame is required?

I managed to reduce a SPHERE dataset by simply creating a for loop over the exposures/angles, but storing the data with save (by passing a list to observables) and then running the analysis tools does not seem to work. I think this is because observables may support a list of observables from different wavelengths, but not from different position angles?

Is there something that I have misunderstood or does AMICAL simply not (yet?) support the calibration of pupil-tracking data?

Thanks a lot and keep up the great work 😊

Data cube with one exposure (i.e. `cube.shape[0]==1`) causes indexing or division error.

When using a data cube with only one exposure (i.e. cube.shape[0]==1), I encountered two errors in extract_bs():

  • An IndexError if clipping had been done beforehand and rejected the only frame in the cube, so that extract_bs() received an empty input.

    Error message
      bs = amical.extract_bs(
      File "/home/vandal/projects/amical/amical/mf_pipeline/bispect.py", line 1138, in extract_bs
        ft_arr, n_ps, npix = _construct_ft_arr(cube)
      File "/home/vandal/projects/amical/amical/mf_pipeline/bispect.py", line 227, in _construct_ft_arr
        n_pix = cube.shape[1]
      IndexError: tuple index out of range
    
  • A ZeroDivisionError if no clipping was done and extract_bs() received a cube with one frame.

    Error message
     bs = amical.extract_bs(
     File "/home/vandal/projects/amical/amical/mf_pipeline/bispect.py", line 1242, in extract_bs
         v2_quantities = _compute_v2_quantities(v2_arr_unbiased, bias_arr, n_blocks)
     File "/home/vandal/projects/amical/amical/mf_pipeline/bispect.py", line 537, in _compute_v2_quantities
         ind2 = (k + 1) * n_ps // (n_blocks - 1)
     ZeroDivisionError: integer division or modulo by zero
    

As discussed with @DrSoulain, this should not occur often and is currently not supported. It might still be worth catching these two cases in extract_bs and raising a more informative error message to clarify what is happening.
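
For illustration, a hedged sketch of the kind of guard discussed above (the helper name is hypothetical; where exactly it should live in extract_bs is up to the maintainers):

import numpy as np

def _check_cube(cube):
    cube = np.asarray(cube)
    if cube.ndim != 3 or cube.shape[0] < 2:
        raise ValueError(
            "extract_bs expects a cube with shape (n_frames, ny, nx) and at least "
            f"2 frames; got shape {cube.shape}. This can happen when frame "
            "selection rejected all (or all but one) of the frames."
        )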

AMICAL with VISIR data

Hello AMICAL developers!

As I noticed on ESO's VISIR website, this code is the proposed tool for SAM data reduction. But is there a manual for it? I only saw an example tutorial for SPHERE.

Apologies in advance if this question should not be on Issues.

Thank you for your time!

`_compute_center_splodge()` output location does not match the central splodge

Hello,

I'm not 100% sure about this, but from what I understand the goal of _compute_center_splodge() is to find the central splodge location, both in the shifted and non-shifted power spectrum. I observed two things with the function:

  1. (Pretty sure this one is a bug) The "centered" version (second element of the output) is always empty, because the meshgrid call in the function uses integers instead of vectors as input. It does return a splodge-shaped set of points when fixing that. This product does not seem to be re-used anywhere, so this does not affect other parts of AMICAL.

  2. (I don't 100% understand this one) Both versions have a *0.6 factor on the np.where() output. This seems to bring the splodge to the middle of the power spectrum instead of the corners for the shifted case, and it moves it off-center for the centered one. I was wondering why this 0.6 factor was there (@DrSoulain)? Removing it seems to bring the indices back to the central splodge for the NIRISS data I've tested. If this is indeed a bug, it probably does not affect anything, as the pixels from the *0.6 fall in regions where the PS is already zero, so no other splodges are masked (and the matched filter does not match the central splodge, so this does not affect the extraction in this way either).

Here is an example that generates the index (the printed output contains both products from above):

from amical.mf_pipeline.ami_function import _compute_center_splodge
from amical.get_infos_obs import get_pixel_size, get_wavelength

npix = 80
filt = get_wavelength("NIRISS", "F480M")
pixelsize = get_pixel_size("NIRISS")
print(_compute_center_splodge(npix, pixelsize, filt))
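
And a hypothetical illustration of point 1: np.meshgrid called with scalars returns 1x1 arrays, so any mask built from them is essentially empty, whereas calling it with coordinate vectors gives the full pixel grid:

import numpy as np

npix = 80
xx_bad, yy_bad = np.meshgrid(npix, npix)                 # shapes (1, 1)
xx, yy = np.meshgrid(np.arange(npix), np.arange(npix))   # shapes (80, 80)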

Option to save CANDID plots

It would be very useful to be able to save the plots CANDID produces, maybe with an optional "save" parameter in amical.plot_model() and amical.candid_cr_limit(), with either a default name based on the input OIFITS file or a custom name that the user can pass as an optional argument.
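
In the meantime, a possible workaround (a sketch only, assuming the CANDID figures are the ones left open by matplotlib after the call; the OIFITS file name is a placeholder):

import matplotlib.pyplot as plt
import amical

param_candid = {"rmin": 20, "rmax": 250, "step": 50, "ncore": 1}
fit = amical.candid_grid("my_data.oifits", **param_candid)

# Save every figure left open after the fit, numbered in creation order.
for num in plt.get_fignums():
    plt.figure(num).savefig(f"candid_grid_fig{num}.png", dpi=150)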

import time is extremely long

Using tuna (pip install tuna), one can profile AMICAL's import time:

python -X importtime -c "import amical" 2> tuna.log && tuna tuna.log

[Screenshot (2022-07-21): tuna import-time profile]

So it takes about 2 s on my machine (that's 20x longer than numpy, for reference), most of which is caused by candid and other heavy dependencies. I think there's a lot of room for improvement.
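
One likely direction is to defer the heavy imports. A minimal sketch, assuming the analysis entry points can import candid lazily (module path taken from the repo layout; the exact wiring may differ):

def candid_grid(input_data, **kwargs):
    from amical.externals.candid import candid  # deferred: loaded on first call, not at import amical time
    ...  # existing fitting logic, unchanged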

"double display" of figures in Jupyter notebooks

Using AMICAL's API to produce figures in a notebook sometimes results in a single figure being displayed more than once. Most likely this is due to a combination of calling plt.show() internally (which should be avoided) and returning handles to the figures created (which is a good thing).

I think this will be solved when we implement an actual CLI and reserve calling plt.show() there, rather than inside functions.
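
A minimal sketch of the pattern this implies (hypothetical function, not AMICAL's actual API): build and return the figure, and let the caller or a future CLI decide when to show it:

import matplotlib.pyplot as plt

def plot_something(data):
    fig, ax = plt.subplots()
    ax.plot(data)
    return fig  # no plt.show() here

# caller (script, notebook, or CLI) decides:
# fig = plot_something(data)
# plt.show()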

BUG: AMICAL disables all warnings

Some modules contain the following code:

import warnings
warnings.filterwarnings("ignore")

This is a big weakness in AMICAL because it has very undesirable side effects: the state of the warnings module is shared globally. This means that importing AMICAL disables all warnings emitted, not just the ones emitted from inside AMICAL itself.
This defeats the whole purpose of warnings, so users importing AMICAL won't see any deprecation warnings coming at them from, say, numpy or matplotlib, and will miss the chance to update their scripts before they update their environment with versions that break them.

I really don't think we should embrace this and it ought to be fixed at some point.

On a related note, a bunch of print statements are used to display errors, but they print to stdout instead of stderr. This could be fixed independently, mind you.

Originally posted by @neutrinoceros in #60 (comment)
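
For reference, a hedged sketch of a narrower alternative: silence warnings only around the specific calls that are known to be noisy, instead of globally at import time (the function name is illustrative):

import warnings

def noisy_step():
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        ...  # the one call whose warnings we deliberately ignore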

Bug with computation of statistical significance in CANDID

There was a bug with the statistical significance computation in CANDID. If I remember correctly my offline discussion with @DrSoulain, the bug is not yet fixed in AMICAL. I just wanted to open an issue to track this. The bug has been fixed in CANDID, so the solution can probably be found somewhere in their code and copied in AMICAL relatively easily. The warning about this in CANDID's README is quoted below.

WARNING: before version 1.0.5 (April 2021), there is a bug in the computation of the statistical significance: the computation are correct below ~4 sigmas, but get progressively overestimated, to double by 14 sigma. In practice, this does not change results, as the significance is only important around or below 4 sigmas. Also, note that the number of sigma cannot go above ~8 with the new (and correct) formula.

MNT: dependabot is configured but not enabled

I just noticed that since #158, there have been no auto-updates from dependabot, but some dependencies are now outdated, so my conclusion is that someone with owner privileges needs to enable it in Settings.

Pymask crashes due to missing DATE-OBS in simulated data

For simulated data, a few keys, including DATE-OBS, are not saved in the infos attribute of the extraction result (see code here). Then the save function simply replaces DATE-OBS with an empty string. This causes Pymask to crash for simulated data, even if the simulated data had a DATE-OBS to begin with (because it is lost in the extraction step), see the error message below.

I fixed this locally by making _add_infos_header parse add_keys for simulated data too, but with a try/except and using hdr[key] to access the info. This way the behaviour is unchanged for simulated data when keys are missing. This fixes the error for simulated data that has a DATE-OBS value.

For simulated data without a DATE-OBS originally (this is the case of the example NIRISS data), two options I see would be to use a placeholder in the _add_infos_header function and/or to add logic inside Pymask to handle empty DATE-OBS without crashing. The latter seems more intuitive, so that if the original observations have no DATE-OBS, the extracted oifits also has none, but I'm not familiar with fits/oifits conventions.

If you want I can send a patch, just let me know which option is best for the empty DATE-OBS case.

Traceback (most recent call last):
  File "example_analysis.py", line 78, in <module>
    fit2 = amical.pymask_grid(inputdata, **param_pymask)
  File "/home/vandal/astro/AMICAL/amical/analysis/easy_pymask.py", line 52, in pymask_grid
    cpo = pymask.cpo(input_data)
  File "/home/vandal/astro/AMICAL/amical/externals/pymask/cpo.py", line 74, in __init__
    self.extract_multi_oifits(oifits)
  File "/home/vandal/astro/AMICAL/amical/externals/pymask/cpo.py", line 126, in extract_multi_oifits
    self.extract_from_oifits(f)
  File "/home/vandal/astro/AMICAL/amical/externals/pymask/cpo.py", line 81, in extract_from_oifits
    data = oifits.open(filename)
  File "/home/vandal/astro/AMICAL/amical/externals/pymask/oifits.py", line 2030, in open
    int(date[0]), int(date[1]), int(date[2])
ValueError: invalid literal for int() with base 10: ''
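
For illustration, a hedged sketch of the local fix described above (the helper name, key list, and data structures are illustrative, not AMICAL's actual internals):

def copy_optional_keys(hdr, infos, add_keys=("DATE-OBS",)):
    for key in add_keys:
        try:
            infos[key] = hdr[key]
        except KeyError:
            pass  # key absent from the header: skip it instead of writing ""
    return infos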

Enhancement: Make keyword arguments of `clean_data` and `show_clean_params` compatible

When using the cleaning functionality in a script, I find it convenient to use **clean_params with a pre-defined dictionary, as done in the examples. This is especially useful in scripts that rely on a config file which contains an arbitrary (a priori unknown) set of cleaning parameters. However, if I'm not mistaken, one set of cleaning params cannot currently contain all cleaning parameters for both clean_data and show_clean_data, because of some kwargs that are in one but not the other.

It would be convenient if the clean_params dictionary could contain all cleaning-related parameters without causing an error because of extra kwargs. I think a simple solution would be to add **kwargs to the signature of both functions (see the sketch below). @DrSoulain, @neutrinoceros, do you think this would work?
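
A minimal sketch of what I have in mind, with hypothetical (truncated) signatures; both functions simply swallow unknown keyword arguments so a single clean_params dict can be passed to either:

def clean_data(cube, isz=None, apod=True, window=None, **kwargs):
    ...

def show_clean_params(filename, isz=None, edge=0, **kwargs):
    ...

clean_params = {"isz": 149, "apod": True, "window": 65, "edge": 0}
# show_clean_params(file_t, **clean_params)   # extra keys are now ignored
# cube = clean_data(raw_cube, **clean_params)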

Question about the super-Gaussian function

Hello AMICAL team!

The super-Gaussian function that is used for windowing AMI data is typically given in a form like exp(-r^4/r_0), from what I could find in a few articles.

Sorry if I may have misunderstood, but when I looked at the super_gaussian function in the tools module of AMICAL, the window function seemed somewhat different, so I was wondering if this is an adjusted form that works better than the "original" form? Effectively, the implementation in AMICAL seems to scale as exp(-r^6) instead of exp(-r^4)?
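
For reference, the form quoted above would look like this (parameter names are illustrative; the question is whether AMICAL's implementation effectively uses a higher exponent):

import numpy as np

def super_gaussian_window(r, r0, m=4):
    # exp(-r^m / r0): m = 4 is the form I found in the articles,
    # while AMICAL's implementation appears to behave like m = 6.
    return np.exp(-(r ** m) / r0)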

I wanted to suggest adding a reference and/or a few details about this to the documentation, since the form given in the AMICAL reference seems different, and it was then also adopted incorrectly into e.g. Blakely et al. 2022. Also, the amplitude a=1 in those two publications seems to be a typo, since the constant should probably be in front of the exp term?

The description of the window parameter in the docstring also does not seem fully accurate, since it is described as the FWHM of the Gaussian. I think it might be the HWHM instead? Or perhaps half of a standard deviation, since the argument passed to super_gaussian is sigma=2*window? I wasn't fully sure how to interpret that. Adding a docstring would also be helpful here.

The actual windowing seems to work fine though 😊 and when I plot the window profile then it looks reasonable (i.e. it has a sharp slope around the specified window value).

I would be happy to create a PR if that would be helpful. Just let me know in that case.

TST: instability on bleeding-edge CI

For a week now, the bleeding-edge CI job has been failing; pytest is actually erroring out at collection, so it's not an internal issue.
I think I've identified the cause: we install numpy from source (currently 1.23.0.dev...), but scipy forces a downgrade, which results in an ABI incompatibility when we import matplotlib (compiled against the source numpy).

The fix is probably to install SciPy from source as well.

Add option to disable display in CLI

I think it would be nice to be able to disable displaying the figures when processing multiple files via CLI. Maybe adding a --display (or --no-display) flag as well as an option to save figures? If both showing and saving are off, then figure creation can be skipped completely.
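
A hedged sketch of the proposed flags using argparse (option names are hypothetical; the actual CLI wiring in AMICAL may differ):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--no-display", action="store_true", help="do not show figures")
parser.add_argument("--save-figures", action="store_true", help="save figures to disk")
args = parser.parse_args()

# Skip figure creation entirely when figures are neither shown nor saved.
make_figures = args.save_figures or not args.no_display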

Enhancement: Support per-integration bad pixel map

Currently, the bad pixel map seems to be a single 2D map passed for all integrations in the data cube, is that correct? If so, for cases where there is one 2D map per integration, would it be OK to add the ability to recognize "bad pixel cubes" in clean_data and show_clean_data? I could send a PR with these changes this week.
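
A minimal sketch of the detection I have in mind (hypothetical helper, assuming bad_map is either a single 2D map or a cube with one map per integration):

import numpy as np

def get_bad_map_for_frame(bad_map, i):
    bad_map = np.asarray(bad_map)
    if bad_map.ndim == 3:   # "bad pixel cube": one 2D map per integration
        return bad_map[i]
    return bad_map          # single 2D map reused for every integration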

SPHERE IFS observations

Hi,
I am having some issues trying to reduce SPHERE IFS observations. I have a cube with dimensions (39, 96, 290, 290), and when I try to run select_clean_data it returns an error message about the depth of the fits file.
Since SPHERE-IFS is described in get_infos_obs (with a fits file containing the wavelength information), I had assumed that I did not need to do anything to the fits file, but I surely missed something. It looks like there is an extra dimension to the fits file that amical doesn't really like, but I am not sure what I should do instead.

Below is what I am trying to do and the error message I am getting.
Thanks a bunch for providing such a useful tool!

datadir = "./"
file_t = os.path.join(datadir, "IFS_file.fits")
clean_param = {"isz": 149, "r1": 70, "dr": 2, "apod": True, "window": 65, "f_kernel": 3}
clip = True
cube_t = amical.select_clean_data(file_t, clip=clip, **clean_param, display=True)

and the error message is the following:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
File ~/.local/lib/python3.8/site-packages/amical/tools.py:70, in crop_max(img, dim, offx, offy, filtmed, f)
     69 try:
---> 70     im_med = medfilt2d(img, f)
     71 except ValueError:

File ~/.local/lib/python3.8/site-packages/scipy/signal/signaltools.py:1863, in medfilt2d(input, kernel_size)
   1861         raise ValueError("Each element of kernel_size should be odd.")
-> 1863 return sigtools._medfilt2d(image, kernel_size)

ValueError: object too deep for desired array

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
Input In [3], in <cell line: 3>()
      1 clean_param = {"isz": 149, "r1": 70, "dr": 2, "apod": True, "window": 65, "f_kernel": 3}
      2 clip = True
----> 3 cube_t = amical.select_clean_data(file_t, clip=clip, **clean_param, display=True)

File ~/.local/lib/python3.8/site-packages/amical/data_processing.py:598, in select_clean_data(filename, isz, r1, dr, edge, clip, bad_map, add_bad, offx, offy, clip_fact, apod, sky, window, darkfile, f_kernel, verbose, ihdu, display)
    595 if add_bad is None:
    596     add_bad = []
--> 598 cube_cleaned = clean_data(
    599     cube,
    600     isz=isz,
    601     r1=r1,
    602     edge=edge,
    603     bad_map=bad_map,
    604     add_bad=add_bad,
    605     dr=dr,
    606     sky=sky,
    607     apod=apod,
    608     window=window,
    609     f_kernel=f_kernel,
    610     offx=offx,
    611     offy=offy,
    612     darkfile=darkfile,
    613     verbose=verbose,
    614 )
    616 if cube_cleaned is None:
    617     return None

File ~/.local/lib/python3.8/site-packages/amical/data_processing.py:482, in clean_data(data, isz, r1, dr, edge, bad_map, add_bad, apod, offx, offy, sky, window, darkfile, f_kernel, verbose)
    479 img1 = _remove_dark(img1, darkfile=darkfile, verbose=verbose)
    481 if isz is not None:
--> 482     im_rec_max = crop_max(img1, isz, offx=offx, offy=offy, f=f_kernel)[0]
    483 else:
    484     im_rec_max = img1.copy()

File ~/.local/lib/python3.8/site-packages/amical/tools.py:73, in crop_max(img, dim, offx, offy, filtmed, f)
     71     except ValueError:
     72         img = img.astype(float)
---> 73         im_med = medfilt2d(img, f)
     74 else:
     75     im_med = img.copy()

File ~/.local/lib/python3.8/site-packages/scipy/signal/signaltools.py:1863, in medfilt2d(input, kernel_size)
   1860     if (size % 2) != 1:
   1861         raise ValueError("Each element of kernel_size should be odd.")
-> 1863 return sigtools._medfilt2d(image, kernel_size)

ValueError: object too deep for desired array
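
A hedged sketch of a possible workaround while 4D IFS cubes are unsupported, assuming the first axis of the (39, 96, 290, 290) cube is wavelength: write one temporary 3D cube per spectral channel and reduce them separately (file names are placeholders):

from astropy.io import fits

data = fits.getdata("IFS_file.fits")        # shape (39, 96, 290, 290)
header = fits.getheader("IFS_file.fits")

for i_wl in range(data.shape[0]):
    fits.writeto(f"IFS_file_wl{i_wl:02d}.fits", data[i_wl], header, overwrite=True)
    # then run amical.select_clean_data on each 3D cube individually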

Example scripts get stuck in a loop

The following script (run from the top level of the repo) fails badly on my machine (it gets stuck in a loop spawning RuntimeErrors):

cd doc
python example_NIRISS.py
python example_analysis.py

Interestingly, @vandalt reports that the same script works on his install! So let's see if we can figure out what the important difference is here:

OS : OSX 11.4
Python version: 3.9.4
matplotlib version : 3.4.2
Astropy version : 4.2.1
amical version : current master (19e522e)

I installed Python with pyenv + virtualenv, then AMICAL and its deps with python -m pip install -e .

Numpy/Astropy PyObject `RuntimeWarning` not ignored when testing a single file with Python 3.10

When running pytest on a single file with Python 3.10, I get the following warning: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 96 from PyObject. It seems to be ignored via pyproject.toml, but it is not ignored when running a single module (e.g. pytest amical/tests/test_cli.py). In Python 3.9 everything seems to work as expected.

TST: set a shorter timeout limit

Right now we're on the default timeout limit (6 h), and we are seeing some jobs reach that limit. In a nominal run, the test suite only takes a couple of minutes, so it'd be useful to bring the limit closer to the actual runtime so we get notified earlier.

Astropy commentary cards not handled by munch

When reading a fits file that has _HeaderCommentaryCards keys, munch seems to interpret them as mappings and looks for a .keys. This causes amical.extract_bs to crash with the error message below.

Click to see the error message
AttributeError                            Traceback (most recent call last)
<ipython-input-1-a4288c748d4a> in <module>
     99 # Extract raw complex observables for the target and the calibrator:
    100 # It's the core of the pipeline (amical/mf_pipeline/bispect.py)
--> 101 bs_t = amical.extract_bs(cube_t, file_t, targetname=target, **params_ami, display=True)
    102
    103 # %%

~/astro/AMICAL/amical/mf_pipeline/bispect.py in extract_bs(cube, filename, maskname, filtname, targetname, instrum, bs_multi_tri, peakmethod, hole_diam, cutoff, fw_splodge, naive_err, n_wl, n_blocks, theta_detector, scaling_uv, i_wl, unbias_v2, compute_cp_cov, expert_plot, verbose, display)
   1155         cprint("\nDone (exec time: %d min %2.1f s)." %
   1156                (m, t - m * 60), color="magenta")
-> 1157     return dict2class(obs_result)

~/.pyenv/versions/amical/lib/python3.7/site-packages/munch/__init__.py in munchify(x, factory)
    440         return partial
    441
--> 442     return munchify_cycles(x)
    443
    444

~/.pyenv/versions/amical/lib/python3.7/site-packages/munch/__init__.py in munchify_cycles(obj)
    412         seen[id(obj)] = partial = pre_munchify(obj)
    413         # Then finish munchifying lists and dicts inside obj (reusing munchified obj if
cycles are encountered)
--> 414         return post_munchify(partial, obj)
    415
    416     def pre_munchify(obj):

~/.pyenv/versions/amical/lib/python3.7/site-packages/munch/__init__.py in post_munchify(partial,
 obj)
    431         # might be involved in a cycle
    432         if isinstance(obj, Mapping):
--> 433             partial.update((k, munchify_cycles(obj[k])) for k in iterkeys(obj))
    434         elif isinstance(obj, list):
    435             partial.extend(munchify_cycles(item) for item in obj)

~/.pyenv/versions/amical/lib/python3.7/site-packages/munch/__init__.py in update(self, *args, **
kwargs)
    232         be defined in subclasses.
    233         """
--> 234         for k, v in iteritems(dict(*args, **kwargs)):
    235             self[k] = v
    236

~/.pyenv/versions/amical/lib/python3.7/site-packages/munch/__init__.py in <genexpr>(.0)
    431         # might be involved in a cycle
    432         if isinstance(obj, Mapping):
--> 433             partial.update((k, munchify_cycles(obj[k])) for k in iterkeys(obj))
    434         elif isinstance(obj, list):
    435             partial.extend(munchify_cycles(item) for item in obj)

~/.pyenv/versions/amical/lib/python3.7/site-packages/munch/__init__.py in munchify_cycles(obj)
    412         seen[id(obj)] = partial = pre_munchify(obj)
    413         # Then finish munchifying lists and dicts inside obj (reusing munchified obj if
cycles are encountered)
--> 414         return post_munchify(partial, obj)
    415
    416     def pre_munchify(obj):

~/.pyenv/versions/amical/lib/python3.7/site-packages/munch/__init__.py in post_munchify(partial,
 obj)
    431         # might be involved in a cycle
    432         if isinstance(obj, Mapping):
--> 433             partial.update((k, munchify_cycles(obj[k])) for k in iterkeys(obj))
    434         elif isinstance(obj, list):
    435             partial.extend(munchify_cycles(item) for item in obj)

~/.pyenv/versions/amical/lib/python3.7/site-packages/munch/__init__.py in update(self, *args, **
kwargs)
    232         be defined in subclasses.
    233         """
--> 234         for k, v in iteritems(dict(*args, **kwargs)):
    235             self[k] = v
    236

~/.pyenv/versions/amical/lib/python3.7/site-packages/munch/__init__.py in <genexpr>(.0)
    431         # might be involved in a cycle
    432         if isinstance(obj, Mapping):
--> 433             partial.update((k, munchify_cycles(obj[k])) for k in iterkeys(obj))
    434         elif isinstance(obj, list):
    435             partial.extend(munchify_cycles(item) for item in obj)

~/.pyenv/versions/amical/lib/python3.7/site-packages/munch/__init__.py in munchify_cycles(obj)
    412         seen[id(obj)] = partial = pre_munchify(obj)
    413         # Then finish munchifying lists and dicts inside obj (reusing munchified obj if
cycles are encountered)
--> 414         return post_munchify(partial, obj)
    415
    416     def pre_munchify(obj):

~/.pyenv/versions/amical/lib/python3.7/site-packages/munch/__init__.py in post_munchify(partial,
 obj)
    431         # might be involved in a cycle
    432         if isinstance(obj, Mapping):
--> 433             partial.update((k, munchify_cycles(obj[k])) for k in iterkeys(obj))
    434         elif isinstance(obj, list):
    435             partial.extend(munchify_cycles(item) for item in obj)

~/.pyenv/versions/amical/lib/python3.7/site-packages/munch/__init__.py in update(self, *args, **
kwargs)
    232         be defined in subclasses.
    233         """
--> 234         for k, v in iteritems(dict(*args, **kwargs)):
    235             self[k] = v
    236

~/.pyenv/versions/amical/lib/python3.7/site-packages/munch/__init__.py in <genexpr>(.0)
    431         # might be involved in a cycle
    432         if isinstance(obj, Mapping):
--> 433             partial.update((k, munchify_cycles(obj[k])) for k in iterkeys(obj))
    434         elif isinstance(obj, list):
    435             partial.extend(munchify_cycles(item) for item in obj)

~/.pyenv/versions/amical/lib/python3.7/site-packages/munch/__init__.py in munchify_cycles(obj)
    412         seen[id(obj)] = partial = pre_munchify(obj)
    413         # Then finish munchifying lists and dicts inside obj (reusing munchified obj if
cycles are encountered)
--> 414         return post_munchify(partial, obj)
    415
    416     def pre_munchify(obj):

~/.pyenv/versions/amical/lib/python3.7/site-packages/munch/__init__.py in post_munchify(partial,
 obj)
    431         # might be involved in a cycle
    432         if isinstance(obj, Mapping):
--> 433             partial.update((k, munchify_cycles(obj[k])) for k in iterkeys(obj))
    434         elif isinstance(obj, list):
    435             partial.extend(munchify_cycles(item) for item in obj)

~/.pyenv/versions/amical/lib/python3.7/site-packages/six.py in iterkeys(d, **kw)
    597 if PY3:
    598     def iterkeys(d, **kw):
--> 599         return iter(d.keys(**kw))
    600
    601     def itervalues(d, **kw):

AttributeError: '_HeaderCommentaryCards' object has no attribute 'keys'

Maybe this is an issue with how the _HeaderCommentaryCards class is defined in astropy (see here)? The solution could be to file an issue upstream with astropy to see if this could be handled there. Otherwise, this could be handled by AMICAL, if you are OK with simply removing the COMMENT and HISTORY keys when creating the oifits files. I implemented the latter fix locally and it works fine. There might also be a better solution; I'm not an expert with astropy fits handling or with munch.

Let me know what you think @DrSoulain. I can then either send a PR or file an issue on the astropy GitHub (or both, if you want to use the PR as a temporary fix).
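
For reference, a minimal sketch of the local fix mentioned above, assuming hdr is an astropy.io.fits.Header (the helper name is illustrative):

def strip_commentary_cards(hdr):
    # Drop the commentary cards that munch cannot handle.
    for key in ("COMMENT", "HISTORY"):
        hdr.remove(key, ignore_missing=True, remove_all=True)
    return hdr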

example doesn't work

Hi Anthony,

Just cloned and ran example_analysis.py: hit this problem right away.


In [1]: run example_analysis.py
 | --- Start CANDID fitting --- :
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
~/code/AMICAL/doc/example_analysis.py in <module>
     40                     }
     41 
---> 42     fit1 = amical.candid_grid(inputdata, **param_candid, diam=20, doNotFit=[])
     43 
     44     cr_candid = amical.candid_cr_limit(

~/opt/anaconda3/lib/python3.8/site-packages/amical-1.2-py3.8.egg/amical/analysis/easy_candid.py in candid_grid(input_data, step, rmin, rmax, diam, obs, extra_error_cp, err_scale, extra_error_v2, instruments, doNotFit, ncore, verbose)
     36 
     37     cprint(' | --- Start CANDID fitting --- :', 'green')
---> 38     o = candid.Open(input_data, extra_error=extra_error_cp,
     39                     err_scale=err_scale, extra_error_v2=extra_error_v2,
     40                     instruments=instruments)

~/opt/anaconda3/lib/python3.8/site-packages/amical-1.2-py3.8.egg/amical/externals/candid/candid.py in __init__(self, filename, rmin, rmax, reducePoly, wlOffset, alpha, v2bias, instruments, largeCP, err_scale, extra_error, extra_error_v2)
   1150         if verbose:
   1151             print(' | compute aux data for companion injection')
-> 1152         self._compute_delta()
   1153         # self.estimateCorrSpecChannels()
   1154 

~/opt/anaconda3/lib/python3.8/site-packages/amical-1.2-py3.8.egg/amical/externals/candid/candid.py in _compute_delta(self)
   1606         allV2 = {'u': np.array([]), 'v': np.array([]), 'mjd': np.array([]),
   1607                  'wl': np.array([]), 'v2': np.array([])}
-> 1608         for r in filter(lambda x: x[0].split(';')[0] == 'v2', self._rawData):
   1609             allV2['u'] = np.append(allV2['u'], r[1].flatten())
   1610             allV2['v'] = np.append(allV2['v'], r[2].flatten())

AttributeError: 'Open' object has no attribute '_rawData'

BUG: incompatibility with corner 2.2.2

amical/tests/test_analysis.py::test_pymask_mcmc is currently failing (see https://github.com/SAIL-Labs/AMICAL/actions/runs/4643055517/jobs/8218550363) with the following error

ValueError: 'title_quantiles' must contain exactly three values; pass a length-3 list or array using the 'title_quantiles' argument

This is caused by an intentional change between corner==2.2.1 and corner==2.2.2 (just released). See dfm/corner.py#193

It's a little unsettling that a bugfix release breaks us (I would have expected a warning), but it also seems pretty easy to fix on our side so let's try that first.
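
A hedged sketch of one easy fix on our side: pass an explicit length-3 title_quantiles to corner.corner (the samples array is a placeholder):

import numpy as np
import corner

samples = np.random.normal(size=(1000, 3))   # placeholder MCMC samples

fig = corner.corner(
    samples,
    show_titles=True,
    quantiles=[0.16, 0.5, 0.84],
    title_quantiles=[0.16, 0.5, 0.84],   # explicit, avoids the new ValueError
)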

DEP: drop tabulate ?

tabulate is used to pretty-print tabular data in a couple places, but it seems like a liability maintenance-wise (the repo has little activity and is seldom released). I wonder if we could use astropy.table instead and reduce the number of dependencies.
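
A minimal sketch of what the replacement could look like; astropy.table can already pretty-print small tables (the rows here are made-up values for illustration):

from astropy.table import Table

t = Table(rows=[("C0-C1", 21.3, 0.4), ("C0-C2", 35.1, 0.6)],
          names=("baseline", "v2", "e_v2"))
t.pprint()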

Enhancement: Enable more flexible background/sky subtraction

Currently, background subtraction is done after the image is cropped, and it is limited to a ring around the PSF. In most cases this is fine, but as in #79, the small size of NIRISS images gave me some problems. When the fringe pattern is roughly centered, the current method works fine, but if the image is off-center, the background is restricted to a very small ring that is very near the fringe pattern. Here are a few possible changes that could help:

  1. Do sky subtraction before cropping the image, allowing users to define a ring in the full image. This leaves a bigger choice of pixels to use, even if the background ring ends up being a circular arc instead of a full circle.
  2. Allow users to provide either a boolean mask to use for sky subtraction, or a function that can be used instead of the default sky_correction function. (I think a boolean mask would give a lot of flexibility and require relatively simple changes to AMICAL.)

In the case of relatively small images (e.g. the 80x80 NIRISS example images), I think the first point above, or something similar, is necessary to make the sky subtraction useful with NIRISS data, and it does not really affect other instruments. The second point would be nice to have as well, although it is less essential.

@DrSoulain, let me know what you think, I'd be happy to send a PR for this.
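
For illustration, a minimal sketch of option 2 above (a hypothetical helper, not AMICAL's actual sky_correction API): accept a boolean mask selecting sky pixels and subtract their median:

import numpy as np

def sky_correction_with_mask(img, sky_mask):
    # sky_mask: boolean array, True where pixels are considered background
    sky_level = np.median(img[sky_mask])
    return img - sky_level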

TYP: missing type-checking job

In response to a bug reported in #96:

One small issue that I noticed is that save=True does not work in these two analysis functions when the input data is a list of files because of os.path.basename(input_data).

Originally posted by @tomasstolker in #96 (comment)

I looked it up and I think this kind of error could have been avoided if we type-checked the code base, so I'll start setting up a type-checking job to help track down this class of bugs.
