
Bayesian Extinction And Stellar Tool

Home Page: http://beast.readthedocs.io

Topics: astronomy, stars, dust, fitting, bayesian, astrophysics, hacktoberfest

beast's Introduction

BEAST

The Bayesian Extinction and Stellar Tool (BEAST) fits the ultraviolet to near-infrared photometric SEDs of stars to extract stellar and dust extinction parameters. The stellar parameters are age (t), mass (M), metallicity (Z), and distance (d). The dust extinction parameters are dust column (Av), average grain size (Rv), and mixing between type A and B extinction curves (fA).

The full details of the BEAST are provided by Gordon et al. (2016, ApJ, 826, 104).


Documentation

Details of installing, running, and contributing to the BEAST are at <http://beast.readthedocs.io>.


Contributors

BEAST contributors 2016 and before (BEAST paper authorship): Karl D. Gordon, Morgan Fouesneau, Heddy Arab, Kirill Tchernyshyov, Daniel R. Weisz, Julianne J. Dalcanton, Benjamin F. Williams, Eric F. Bell, Lucianna Bianchi, Martha Boyer, Yumi Choi, Andrew Dolphin, Leo Girardi, David W. Hogg, Jason S. Kalirai, Maria Kapala, Alexia R. Lewis, Hans-Walter Rix, Karin Sandstrom, and Evan D. Skillman

Direct code contributors (including new contributors since 2016): <https://github.com/BEAST-fitting/beast/graphs/contributors>

Attribution

Please cite Gordon et al. (2016, ApJ, 826, 104) if you find this code useful in your research. The BibTeX entry for the paper is:

@ARTICLE{2016ApJ...826..104G,
  author = {{Gordon}, K.~D. and {Fouesneau}, M. and {Arab}, H. and {Tchernyshyov}, K. and
      {Weisz}, D.~R. and {Dalcanton}, J.~J. and {Williams}, B.~F. and
      {Bell}, E.~F. and {Bianchi}, L. and {Boyer}, M. and {Choi}, Y. and
      {Dolphin}, A. and {Girardi}, L. and {Hogg}, D.~W. and {Kalirai}, J.~S. and
      {Kapala}, M. and {Lewis}, A.~R. and {Rix}, H.-W. and {Sandstrom}, K. and
      {Skillman}, E.~D.},
  title = "{The Panchromatic Hubble Andromeda Treasury. XV. The BEAST: Bayesian Extinction and Stellar Tool}",
  journal = {\apj},
  archivePrefix = "arXiv",
  eprint = {1606.06182},
  keywords = {dust, extinction, galaxies: individual: M31, methods: data analysis, methods: statistical, stars: fundamental parameters},
  year = 2016,
  month = aug,
  volume = 826,
  eid = {104},
  pages = {104},
  doi = {10.3847/0004-637X/826/2/104},
  adsurl = {http://adsabs.harvard.edu/abs/2016ApJ...826..104G},
  adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

In Development!

This code is currently in active development.

Contributing

Please open a new issue or new pull request for bugs, feedback, or new features you would like to see. If there is an issue you would like to work on, please leave a comment and we will be happy to assist. New contributions and contributors are very welcome!

New to GitHub or open source projects? If you are unsure about where to start or haven't used GitHub before, please feel free to contact @karllark. Want more information about how to make a contribution? Take a look at the astropy contributing and developer documentation.

Feedback and feature requests? Is there something missing you would like to see? Please open an issue or send an email to @karllark. BEAST follows the Astropy Code of Conduct and strives to provide a welcoming community to all of our users and contributors.

We love contributions! beast is open source, built on open source, and we'd love to have you hang out in our community.

Imposter syndrome disclaimer: We want your help. No, really.

There may be a little voice inside your head that is telling you that you're not ready to be an open source contributor; that your skills aren't nearly good enough to contribute. What could you possibly offer a project like this one?

We assure you - the little voice in your head is wrong. If you can write code at all, you can contribute code to open source. Contributing to open source projects is a fantastic way to advance one's coding skills. Writing perfect code isn't the measure of a good developer (that would disqualify all of us!); it's trying to create something, making mistakes, and learning from those mistakes. That's how we all improve, and we are happy to help others learn.

Being an open source contributor doesn't just mean writing code, either. You can help out by writing documentation, tests, or even giving feedback about the project (and yes - that includes giving feedback about the contribution process). Some of these contributions may be the most valuable to the project as a whole, because you're coming to the project with fresh eyes, so you can see the errors and assumptions that seasoned contributors have glossed over.

This disclaimer was originally written by Adrienne Lowe for a PyCon talk, and was adapted by the BEAST based on its use in the README file for the MetPy project.

License

This project is Copyright (c) Karl Gordon and BEAST Team and licensed under the terms of the BSD 3-Clause license. This package is based upon the Astropy package template, which is licensed under the BSD 3-Clause license. See the licenses folder for more information.

beast's People

Contributors

astronomeralex, bsipocz, christinawlindberg, cmurray-astro, eteq, galaxyumi, heddyarab, jduval82, jwuphysics, kapala, karinsandstrom, karllark, ktchrn, lcjohnso, lea-hagen, marthaboyer, mdecleir, meredith-durbin, mfouesneau, ogtelford, petiay, rubab1, s-goldman, stargrazer82301


beast's Issues

Pytables support from v2 to v3

PyTables is now at version 3, and the API changed in ways that break the existing BEAST code. The main change is that many of the calls have slightly different names.
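As an illustration of the scale of the change (not BEAST code): PyTables 3 renames most camelCase calls to snake_case, so a thin helper could centralize the new spellings in one place.

import tables

def read_grid_array(filename, node_path):
    # PyTables 2 spelled these tables.openFile(...) and f.getNode(...);
    # PyTables 3 uses tables.open_file(...) and f.get_node(...)
    with tables.open_file(filename, mode="r") as hdf:
        return hdf.get_node(node_path).read()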

Setup basic documentation

Include basic information and an API reference based on the code docstrings.
The docs need to cover how to install the BEAST, including external submodules as needed.
Also, how to download the needed data files.

Metallicity initialization

At the moment, Z has four arbitrary, non-uniformly spaced initial values (z = [0.03, 0.019, 0.008, 0.004]) in datamodel.py. We might want to change that and allow Z to be set in the same manner as the other parameters: [min, max, step].
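A minimal sketch of what that could look like (the variable names below are hypothetical, not the current datamodel.py contents):

import numpy as np

# current hard-coded values: z = [0.03, 0.019, 0.008, 0.004]
# proposed [min, max, step] form, expanded to a grid:
z_min, z_max, z_step = 0.004, 0.03, 0.002
z = np.arange(z_min, z_max + z_step, z_step).tolist()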

Setup BEAST for building

A better way to distribute the BEAST is needed for those who do not want to contribute to coding the BEAST and just want to run it.
Distribute via pip, etc., not only via GitHub.

Add example of XSEDE run

We need to add the code required to run on large computing facilities (e.g., XSEDE machines).
This is useful for showing how to run parallel fits with the BEAST efficiently.

Add physical, non-interacting (and apparent?) binaries to stellar grid

Adding binaries to the stellar grid would significantly improve the fidelity of the model, given that many (50%?) of the point sources in galaxies are binaries. Currently, only single stars are modeled.

There are two types of binaries: physical and apparent.

Physical binaries are those that are coeval and orbiting each other; they will have the same age. Physical binaries can be split into two categories: non-interacting and interacting. The non-interacting binaries could be modeled as two single stars with the same age using the single-star stellar evolution tracks. The interacting binaries would have to be modeled using stellar evolution codes that explicitly deal with the interaction. That sounds hard.

Apparent binaries are those that are not physically associated, but happen to show up along the same line-of-sight (unresolved).

Adding physical, non-interacting binaries should be the most straightforward. The number of new grid points could be minimized by applying a filter that only adds a binary model to the grid if the binary SED differs, at any wavelength in a defined range, by some TBD amount (1%?) from the SED of the most luminous source alone. The two stars in the binary would be constrained to have the same age.

Of course, the same code could be used for apparent binaries without the constraint that the two sources have the same age. The number of apparent binary SEDs would be much greater than for the physical, non-interacting case, but that is just a matter of computer memory and computation time.

Reorg for Observation Model

This needs to be done at the same time as testing that the code works.
Do it in pieces to make sure the code keeps working.

Unclosed noisemodel files

When running the phat_small example, the following files are left unclosed:
beast_example_phat/beast_example_phat_noisemodel.hd5
beast_example_phat/beast_example_phat_noisemodel_trim.grid.hd5

Is there a missing close statement somewhere?
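A likely fix pattern (illustrative only; the node names below are assumptions, not verified against the BEAST noisemodel layout) is to open the files with a context manager so they are closed even if an exception is raised mid-read:

import tables

def read_noisemodel(noisefile):
    # the file is closed automatically when the with-block exits
    with tables.open_file(noisefile, mode="r") as hdf:
        bias = hdf.root.bias.read()    # node names assumed for illustration
        error = hdf.root.error.read()
    return bias, error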

Simplify the Example Case

We got good feedback at the BEAST hack week that the example should ideally be a single file (datamodel.py) and that there should be "generic" code to run the BEAST.

Remove all projects -> change to examples

Projects should be kept in individual user repositories, not in the main BEAST repository.
This makes for much cleaner code.
Examples should be provided instead.
The "small" PHAT example is ideal and traceable to the BEAST paper.

Adjustable distance prior

There is currently no distance prior. Issue #38 will add a way to have distance as a fit parameter, but only with a flat prior. We should have an adjustable distance prior.

It would be better to have the functional form and the necessary variables set in the datamodel.py file or somewhere else; somewhere else may be better for eventual use by the MegaBEAST.

Two motivations:

  • Provide flexibility for the input of the distance prior for BEAST runs
  • Provide flexibility for the eventual use of the BEAST distance model by the MegaBEAST

Unit support via astropy units

Astropy has a units package. Currently the BEAST uses ezunits, an external package. It would be easier to maintain if we used the astropy units package, given that it is supported by the community. It is not clear how much we currently use units in the BEAST. Using them would likely be good, but we need to think about how to do this and how it would benefit the BEAST. An alternative would be to do away with unit support in the BEAST altogether.
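For reference, a minimal astropy.units example of the kind of conversions ezunits is used for (not tied to a specific BEAST call site):

import astropy.units as u

wavelength = 1500.0 * u.angstrom
print(wavelength.to(u.micron))                      # 0.15 micron
flux = 1e-15 * u.erg / (u.s * u.cm**2 * u.AA)
print(flux.to(u.Jy, equivalencies=u.spectral_density(wavelength)))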

Remove some large files from commit history

We need to remove some large (binary) files from the commit history. They were mistakenly added and now slow down the download/upload of all branches. Only some of the binary files need deleting; some should be kept.

Modify spec file format to remove h5py incompatible datatype

There is a datatype in the spec HDF5 format files (full spectra on the BEAST spectral grid) that is not supported by h5py.
This means that the regression check is currently not run on the spec files, as this check uses h5py.
In addition, this locks the BEAST's use of HDF5 files to PyTables (at least in my understanding).

Split weights into grid and prior components

The weights in the BEAST currently combine the weights needed to adjust for the non-uniform/non-ideal grid spacing with the input priors.
These should be separated into two components so that each is clearly identified. This will make it easier to support adjustable priors and eventual MegaBEAST use, and it will allow easier visualization of the priors.
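A sketch of the proposed separation (names are illustrative): store the two components separately and only combine them at fit time.

import numpy as np

def combine_weights(grid_weight, prior_weight):
    # total weight = grid-spacing weight x prior weight; keeping the two
    # separate lets priors be swapped or plotted without rebuilding the grid
    return np.asarray(grid_weight) * np.asarray(prior_weight)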

Reorg code for Physics Model

This needs to be done at the same time as testing that the code works.
Do it in pieces to make sure the code keeps working.

Splitting of ASTs/observations by source density

The uncertainties due to crowding/confusion, as derived from the ASTs, can depend strongly on source density.
We need to add scripts to split the ASTs into separate source density files.
The same goes for the observations.

Update for python 3

The BEAST works with Python 2.7, but not Python 3 or greater.
The update is needed mainly to handle the print statements. Use 'from __future__ import print_function, division' at a minimum.
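The minimal compatibility header (note the double underscores around "future"):

from __future__ import print_function, division

print("BEAST", "works the same on python 2.7 and 3")  # print is now a function
print(1 / 2)  # true division gives 0.5 under both interpreters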

Document the BEAST file formats

The BEAST file formats need documentation. This includes the output files with the fitting results (stats, pdf1d, and lnp) and the "internal" files (sed.grid, spec.grid, noisemodel, etc.). Very sparse docs in beast_grid_formats.rst. Related to #396 and #299.

Specifically:

  • isochrone grid
  • spectral grid
  • sed grid
  • observation model grid
  • fitting results (stats, pdf1d, lnp)
  • observed data file
  • AST inputs file
  • AST outputs file

Add distance to the BEAST

Adding distance is needed for work in the Magellanic Clouds.

The fastest way to do this from a human standpoint is to do multiple BEAST runs with different distances on a uniform grid. The results can then be used to regenerate all the standard outputs of the BEAST with the distance information included and distance as the 7th fit parameter.

Basically (given a uniform distance grid):

  • For all the 1D pPDFs (except distance), the 1D pPDFs for the different distances can be added together to generate the 1D pPDFs including marginalization over distance.
    We need to make sure we are not normalizing the 1D pPDFs for output.
  • The 1D pPDF for distance can be created by simply summing one of the 1D pPDFs for any other parameter at each distance, as this provides the marginalization over all other parameters.
  • For the stats file, all the parameters (p50, p13, p87, exp, etc.) need to be regenerated from the 1D pPDFs (just as they are during the fitting).
  • For the max values, use the set of Pmax values for each distance and find the maximum one to set the new Pmax value (and min chisqr, etc.).
  • For the sparse nD likelihoods, I think the solution is to merge all of them into one file, creating a sparse likelihood that is n times larger than for a single-distance run (more thought is needed about whether this is correct from a sampling standpoint).

Code is needed to do all of this semi-seamlessly to avoid human error (e.g., include the distance grid in datamodel.py and provide scripts to set up the n distance BEAST grids and merge the results).
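A hedged sketch of the merging step (array layout and parameter names are assumptions for illustration): sum the unnormalized 1D pPDFs across the single-distance runs, and build the distance pPDF by collapsing any one parameter's pPDF within each run.

import numpy as np

def merge_distance_runs(pdf1d_per_distance):
    # pdf1d_per_distance: list over distances of dicts
    # {param_name: array of shape (n_stars, n_bins)} of unnormalized 1D pPDFs
    merged = {}
    for name in pdf1d_per_distance[0]:
        merged[name] = np.sum([run[name] for run in pdf1d_per_distance], axis=0)
    # distance pPDF: summing one parameter's pPDF over its bins marginalizes
    # over everything else at that distance ("Av" is just an example choice)
    merged["distance"] = np.stack(
        [run["Av"].sum(axis=1) for run in pdf1d_per_distance], axis=1
    )
    return merged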

Use of SED grid 'keep' column?

The SED grid has a 'keep' column. Is this column used anywhere? It has changed, as revealed by the regress_check code, but I cannot find where it is used. If it is not used, should we delete it? And if it is not used now, should it be used?

Adjustable age, mass, and metallicity priors

The age, mass, and metallicity priors are currently hard coded. They are:

  • age: constant star formation rate (linear age)
  • mass: a Kroupa IMF
  • metallicity: flat

It would be better to have the functional form and the necessary variables set in the datamodel.py file or somewhere else; somewhere else may be better for eventual use by the MegaBEAST.

Two motivations:

  • Provide flexibility for the input of the stellar priors for BEAST runs
  • Provide flexibility for the eventual use of the BEAST stellar physicsmodel by the MegaBEAST
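A hypothetical datamodel.py form (names are illustrative, not an existing BEAST interface): give each prior a named functional form plus its variables so they can be changed without touching the fitting code.

age_prior_model = {"name": "flat"}      # constant star formation rate
mass_prior_model = {"name": "kroupa"}   # Kroupa IMF
met_prior_model = {"name": "flat"}      # flat in metallicity
# an alternative with parameters might look like:
# age_prior_model = {"name": "exponential", "tau": 1.0e9}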

Reorg for Fitting

This needs to be done at the same time as testing that the code works.
Do it in pieces to make sure the code keeps working.

How to include non-python files in build

The documentation and testing are failing due to missing files in the build.
There must be some place to designate how to copy these non-.py files, like the JSON files.

Snippet of the error:
File "/home/kgordon/Python_git/beast/build/lib.linux-x86_64-3.5/beast/physicsmodel/stars/ezpadova/parsec.py", line 35, in
with open(localpath + '/parsec.json') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/kgordon/Python_git/beast/build/lib.linux-x86_64-3.5/beast/physicsmodel/stars/ezpadova/parsec.json'
Sphinx Documentation subprocess failed with return code 1
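One standard setuptools way to do this (the package path below mirrors the error message and is illustrative) is to declare the files as package_data in setup.py so the build step copies them alongside the .py modules:

from setuptools import setup

setup(
    name="beast",
    packages=["beast", "beast.physicsmodel.stars.ezpadova"],
    # copy the JSON data files into the build directory with the modules
    package_data={"beast.physicsmodel.stars.ezpadova": ["*.json"]},
)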

HDF5 support via h5py instead of pytables

h5py support is easier to maintain and does not require the external hdf5 system library. Having to get this system library installed is one of the barriers to easy use of the BEAST. In addition, h5py may be more pythonic.
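For comparison with the PyTables calls, a minimal h5py read pattern (the file and dataset names are placeholders):

import h5py

with h5py.File("beast_example_phat_seds.grid.hd5", "r") as hdf:
    print(list(hdf.keys()))     # list top-level groups/datasets
    seds = hdf["seds"][...]     # read a dataset into a numpy array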

Adjustable A(V), R(V), and f_A priors

The A(V), R(V), and f_A priors are currently hard coded. They are:

  • A(V): flat
  • R(V) - f_A: triangular shape based on the Gordon16 model (see the BEAST paper)

It would be better to have the functional form and the necessary variables set in the datamodel.py file or somewhere else; somewhere else may be better for eventual use by the MegaBEAST.

Two motivations:

  • Provide flexibility for the input of the dust priors for BEAST runs
  • Provide flexibility for the eventual use of the BEAST dust physicsmodel by the MegaBEAST

Test code needed

Set up test code in the astropy test format. The existing test code could possibly be reused, but that is not certain.
Set up continuous testing with Travis as part of this.
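A sketch of what a test in that format could look like (file and function names are illustrative); Travis would then simply run the test suite on each push:

import numpy as np
from numpy.testing import assert_allclose

def test_weight_combination():
    # trivial placeholder check: combined weights are the product of the parts
    grid_w = np.array([1.0, 2.0])
    prior_w = np.array([0.5, 0.25])
    assert_allclose(grid_w * prior_w, [0.5, 0.5])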

Add evolutionary tracks as a 2nd option for generating the stellar grid

The stellar grid is currently based on isochrones. Thus the grid has uniform spacing in age, but non-uniform spacing in mass. Isochrones are best for dealing with ensembles of stars, but not necessarily for individual stars. The issue for the BEAST is that the age spacing must be fine enough to sample the evolution of high-mass stars, and this severely oversamples the evolution of low-mass stars.

One solution is to change from using isochrones to evolutionary tracks. Stellar interior models create the evolutionary tracks which are then interpolated to isochrones. The tracks are given for a set of stars with a range of initial masses. Thus the stellar grid created from them will have a uniform spacing in mass, and a non-uniform spacing in age. This should create a significantly smaller grid with the same effective coverage of stellar parameters. Smaller grids mean faster run times or the ability to create grids with finer sampling of stellar parameters.

Basing the stellar grid on evolutionary tracks is definitely possible as this was what the original version of the BEAST did (in IDL).

Leo Girardi stated in an email to me that he has code for doing efficient/good interpolation on the stellar evolutionary tracks. Hopefully, this is in python.

Cleanup code

Need to really go through the code and clean it up. There is significant orphaned code that is no longer used due to changes in the design of the BEAST. Less code will make it easier to maintain.

Define API between BEAST modules

It would be useful to have a clear, maintained definition of the API between the different main modules of the BEAST.
What does the physicsmodel need for input and output?
What does the observationmodel need for input and output?
What does the fitting module need for input and output?

What does the MegaBEAST need for input from the BEAST?

Propose a new BEAST output file format

The output of the BEAST fitting is split into three files:

  • a FITS binary table for the summary fitting statistics (best, p50, exp, etc. values)
  • a FITS image file with many extensions for the 1D pPDFs
  • an HDF5 file for the sparse likelihoods

Having things in three different files makes it more difficult to use the full BEAST output, and it means there could be an ordering mismatch between the different files.

A single file with all the information for each star in the same place would make things easier. Given that the full set of BEAST data for each star is heterogeneous, FITS files are not likely to provide a good answer. HDF5 or ASDF files are the two options I can think of. Maybe we support both, or pick one. Useful reading:
http://adsabs.harvard.edu/abs/2015A%26C....12..240G

Table support from astropy instead of eztables

Astropy now includes a very nice interface for generic table I/O and manipulation. Currently the BEAST uses a mix of astropy tables and eztables. It would be better to use just one, and astropy tables is supported by the community, so there is no need to maintain table code ourselves as an external package.
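A minimal astropy.table example of the generic I/O that would replace eztables (the file and column names are placeholders):

from astropy.table import Table

cat = Table.read("beast_example_phat_stats.fits")   # FITS, HDF5, ASCII, ...
print(cat.colnames)
good = cat[cat["chi2min"] < 10.0]                   # boolean-mask selection
good.write("good_stats.fits", overwrite=True)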
