eduardrusu / zmstarpdf

Gravitational lens environment modeling using Photo-z and Mstar PDFs with various systematics

Python 85.47% Shell 14.52% C 0.01%
gravitational-lensing photometric-redshifts external-convergence-and-shear

zmstarpdf's Introduction

zMstarPDF

Update

At present, this repository is no longer maintained. Most, although not all, of its functionality has been ported over to cosmap and lenskappa, analysis engines which are more user-friendly, better documented, and expected to be maintained going forward.

This repository has been used in the preparation of the following papers:

Original project description

Calculating joint photo-z and Mstar PDFs, while coping with various systematics.

Motivated primarily by the need to understand massive structures along the lines of sight to time-delay strong gravitational lenses, so that we can make accurate time-delay distance measurements, we are trying to infer photometric redshifts simultaneously (or at least self-consistently) with stellar masses, from optical and near-infrared survey photometry.

Our main test data are the 5 H0LiCOW lens fields, observed in ugriJHK as well as the 4 IRAC bands. We use a weighted counts approach to translate measured over/under-densities in these fields, relative to a large calibration survey, into a probability distribution for the external convergence (see Rusu et al. 2017 for details). Our calibration data of choice are the CFHTLenS object catalogs, generated with SExtractor, with photo-zs and stellar mass estimates from BPZ and LePhare, respectively. We have two options for inferring z and Mstar from these datasets:

  • Follow the same procedure as the CFHTLenS team, so that we can simply re-use their data products. This is the approach we used in Rusu et al, 2017.
  • Infer z and Mstar from the CFHTLS and H0LiCOW photometry afresh, generating MCMC samples from Pr(z,Mstar|data) directly. This is possible using the stellarpops code (Auger et al 2010).
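
Either way, the weighted counts comparison itself reduces to ratios of summed per-galaxy weights. The sketch below is illustrative only (the function names and toy inputs are not the repository's actual API): for a chosen weight (1 for plain counts, Mstar, Mstar^2, etc.), the over/under-density of a lens field relative to the calibration survey is the ratio of summed weights per unit unmasked area.

```python
import numpy as np

def weighted_count(weights, area):
    """Sum of per-galaxy weights, normalized by the unmasked area."""
    return np.sum(weights) / area

def weight_ratio(lens_weights, lens_area, calib_weights, calib_area):
    """Over/under-density of the lens field w.r.t. the calibration survey."""
    return weighted_count(lens_weights, lens_area) / weighted_count(calib_weights, calib_area)

# Toy example: plain number counts vs. stellar-mass-weighted counts,
# both fields taken to have unit unmasked area.
rng = np.random.default_rng(0)
mstar_lens = rng.lognormal(mean=23.0, sigma=1.0, size=120)   # toy masses, denser field
mstar_calib = rng.lognormal(mean=23.0, sigma=1.0, size=100)

ratio_n = weight_ratio(np.ones(120), 1.0, np.ones(100), 1.0)  # = 1.2
ratio_mass = weight_ratio(mstar_lens, 1.0, mstar_calib, 1.0)
```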

This repository contains scripts and notes, as well as code used in our investigations of these options. In particular, it contains the complete code used for the analysis presented in Rusu et al, 2017. A summary of our results is available on this webpage.

People

  • Cristian Eduard Rusu (UC Davis)
  • Chris Fassnacht (UC Davis)
  • Phil Marshall (KIPAC)

Contacts, License etc.

This is a work of astronomical research: while the contents of this repository are publicly visible, they are Copyright 2015 the authors and not available for re-use. Please cite Rusu et al. 2017 if you need to refer to this work, and feel free to get in touch via this repo's issues.


zmstarpdf's Issues

Histograms

In the python folder there are 5 histograms containing 16 weight ratios each. Each ratio is computed only for subfields that have more than 50% or 75% of their surface free of masks; the numbers on the histograms, for each of the W1-W4 subfields, are the peak of the histogram, the average of the distribution, and the median, respectively. For each weight ratio, I cut the distributions above a weight ratio of 10, mainly because when weighting by mass, mass^2, and mass^3 the tail of the distribution is very long and skews the statistics.
- histograms marked with "orig" do not consider P(z) or P(Mstar), just the best-fit values
- those marked with "samp" consider a rudimentary P(z) and P(Mstar): I took the +/-68% confidence limits, approximated the true distribution with a Gaussian of the appropriate sigma on each side, and then drew 100 realizations of z and Mstar for each object (the final catalogue is therefore 100 times the original size)
- those marked with "i23" contain only objects up to i=23; the rest go up to i=24
- the one marked with "noCFHTLENSsamp" considers the rudimentary P(z) and P(Mstar) only for the lens fields, not for the calibration fields
These are not the final histograms: for the lens fields, I used just a simple cut in class_star to separate stars and galaxies; also, for the calibration fields, I used z, Mstar, and their confidence limits from CFHTLenS, so these were not computed in the same way as for the lens fields.
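
The "samp" realizations described above can be sketched as draws from a two-sided (split) Gaussian, with the sigma below the peak taken from the -68% limit and the sigma above from the +68% limit. This is an illustrative implementation under those assumptions, not the exact code behind the histograms:

```python
import numpy as np

def sample_split_gaussian(best, lo68, hi68, n=100, rng=None):
    """Draw n realizations from a two-sided Gaussian approximation to a PDF.
    Each side is picked in proportion to its width so the density is
    continuous at the peak."""
    rng = rng or np.random.default_rng()
    sig_lo = best - lo68          # sigma below the peak, from the -68% limit
    sig_hi = hi68 - best          # sigma above the peak, from the +68% limit
    mags = np.abs(rng.standard_normal(n))               # half-normal magnitudes
    below = rng.random(n) < sig_lo / (sig_lo + sig_hi)  # choose a side
    return np.where(below, best - mags * sig_lo, best + mags * sig_hi)

# e.g. a hypothetical photo-z of z = 0.80 (+0.10 / -0.05):
z_samples = sample_split_gaussian(0.80, 0.75, 0.90, rng=np.random.default_rng(1))
```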

Conclusions at this stage (these may change when I recompute the histograms after fixing the issues above):
- compared to "orig", the "samp" distributions are wider, as expected, and also shifted, which is expected given the asymmetric error bars; still, the shifts seem very large
- field W4 no longer appears different from the others, as it did in the first histograms; that might have been a bug
- 50% and 75% free surface fields are very similar
- i<23 and i<24 limits: the comparison is not very useful now because the star-galaxy classification is not reliable; the distributions are much broader for i<23, and I'm not sure why
- I worry that when we account for different systematics we will get large shifts between the average/median of the weight ratio distributions; the original idea is to use the size of these shifts as error bars on the average/median, but if they are too large they are no longer informative
- we need to decide what weights to use in order to get kappa_ext from the Millennium simulations: do we use the weights Greene et al. suggested, or the ones with the smallest scatter when accounting for different systematics?
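
For reference, the summary statistics quoted on the histograms (peak, average, median, after cutting ratios above 10) can be sketched as follows; the toy long-tailed input and bin count are illustrative, not the actual data:

```python
import numpy as np

def histogram_stats(ratios, cut=10.0, bins=50):
    """Peak, average, and median of a weight-ratio distribution, after
    discarding the long tail above `cut` (as done for the mass, mass^2,
    and mass^3 weights)."""
    r = np.asarray(ratios)
    r = r[r <= cut]
    counts, edges = np.histogram(r, bins=bins)
    i = np.argmax(counts)
    peak = 0.5 * (edges[i] + edges[i + 1])   # center of the tallest bin
    return peak, r.mean(), np.median(r)

rng = np.random.default_rng(2)
toy_ratios = rng.lognormal(mean=0.0, sigma=0.6, size=5000)  # toy long-tailed distribution
peak, mean, median = histogram_stats(toy_ratios)
```

For a right-skewed distribution like this, the average sits above the median, which is why the long tail has to be cut before the statistics are meaningful.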

Organizing initial code

  • Check in basic scripts
  • Tidy crap away into the attic
  • File data and notes away in data and doc directories

Clarifying and focusing systematics plan

I started working through the plan on the wiki, trying to make a clearly defined list of bulleted systematics. When finalized, these might need labelling with numbers or letters, so they can be referred to in the "tests". Or, the tests could be inserted after the description - and then, for clarity, we could provide a simple table at the top of the page with links to the individual systematics further down. Either way, let's iterate!

Note the small additions I made, just trying to clarify the thinking - please carry on with this! The clearer we write about the issues now, the easier it will be to be efficient with the tests!

Re-using LePhare CFHTLenS photo-zs?

Is it possible to reconstruct the joint PDF Pr(z,Mstar | photometry) from the CFHTLenS outputs? In an ideal world we'd be provided with not only Pr(z | photometry) but also Pr(Mstar | z,photometry) (perhaps on a 2D grid), such that the product gives us what we need. I think this is a good question for the producers of the catalogs: let's check their papers for the answer, and then contact them if we can't find it.

Alternatively, we have some compressed information (Mstar estimates with confidence intervals, plus a tabulated Pr(z|photometry)) that could perhaps be interpreted as above, to give a useful approximation. I'm not sure if this approximation will be good enough for our purposes: it might be. We must test it, if we have to go this route.
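
The compressed-information approximation above can be sketched on a 2D grid: take the tabulated Pr(z|photometry) as-is, and stand in for Pr(Mstar|z,photometry) with a z-independent Gaussian centred on the catalogue Mstar estimate. All function names and inputs here are hypothetical, and, as noted, this approximation would need to be tested before use:

```python
import numpy as np

def joint_pdf_grid(z_grid, pz_tab, mstar_grid, mstar_best, mstar_sigma):
    """Approximate Pr(z, Mstar | photometry) as the outer product
    Pr(z | phot) x Pr(Mstar | z, phot), with the Mstar conditional replaced
    by a z-independent Gaussian at the catalogue estimate. Assumes uniform
    grids; returns an array of shape (len(z_grid), len(mstar_grid))."""
    dz = z_grid[1] - z_grid[0]
    dm = mstar_grid[1] - mstar_grid[0]
    pz = pz_tab / (pz_tab.sum() * dz)        # normalize the tabulated Pr(z|phot)
    pm = np.exp(-0.5 * ((mstar_grid - mstar_best) / mstar_sigma) ** 2)
    pm /= pm.sum() * dm                      # normalized Gaussian in Mstar
    return np.outer(pz, pm)

# Toy inputs (all values hypothetical):
z = np.linspace(0.0, 3.0, 300)
pz_tab = np.exp(-0.5 * ((z - 0.8) / 0.1) ** 2)   # stand-in for the tabulated P(z)
m = np.linspace(8.0, 12.0, 200)                  # log10(Mstar / Msolar)
grid = joint_pdf_grid(z, pz_tab, m, mstar_best=10.5, mstar_sigma=0.3)
```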

Stefan's Millennium Simulations

Initial thoughts after a first look through the files.

  • As far as I can tell, the convergence maps I have are only for the redshift of B1608.
  • According to the README, the galaxy models become incomplete for galaxies with stellar masses M_Stellar ~ 1e9 Msolar and below, due to the limited resolution of the simulation. So there are galaxies with M_Stellar < 1e9 Msolar in the catalogue, but they constitute an incomplete sample. Does that mean that we should throw away the low-mass galaxies from the lens fields and CFHTLenS when computing the overdensities? These constitute quite a large fraction.
  • I updated the wiki with a simulated_z - z_BPZ plot. It seems the simulated galaxies do indeed have realistic colors. I could compute the photo-z and stellar mass in the same manner as I do for CFHTLenS and the 4 lens fields (it would be very computationally intensive though), but the question is: do we want to? According to Stefan's README, for the lightcone reconstruction people should ideally only use the sky positions and the magnitudes (all else could be considered cheating).
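
If the answer to the completeness question is yes, the key point is consistency: the same cut would have to be applied to the simulated, lens-field, and CFHTLenS catalogues alike. A minimal sketch, with toy masses and a hypothetical helper name:

```python
import numpy as np

MSTAR_MIN = 1e9  # Msolar; the completeness limit quoted in the simulation README

def completeness_mask(mstar):
    """Boolean mask keeping only galaxies above the completeness limit.
    Apply the same mask to simulated and observed catalogues so the
    overdensities remain directly comparable."""
    return np.asarray(mstar) >= MSTAR_MIN

mstar = np.array([5e8, 2e9, 1e11, 8e8])   # toy stellar masses in Msolar
kept = mstar[completeness_mask(mstar)]    # keeps 2e9 and 1e11
```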

Contact CFHTLenS authors

Need to email the authors of the CFHTLenS papers about several issues:
- Find out what value of h they used
- See if we can get their BPZ and LePhare config files, and especially the interpolated templates they used
- The CFHTLenS catalogue includes photometry and extinction values, but it is unclear to me whether the photometry has already been corrected for extinction or whether it still needs to be
- anything else?
