Comments (6)
I think this would significantly increase the total numerical cost, since you’d be evaluating locations that end up being rejected? The sequential ordering of vectors just determines the proposals, but the evaluation order is still Markovian.
The best way to test slow variants is usually to importance sample: that way you only need to evaluate the likelihood on the final accepted (and possibly thinned) samples. Importance sampling also avoids Monte Carlo noise when comparing different theory calculations on the same samples.
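A minimal sketch of that reweighting, assuming you already have the chain's weights and log-likelihoods as arrays and have evaluated the new (slow) likelihood on the same accepted samples; the function name and the effective-sample-size check are illustrative, not cobaya API:

```python
import numpy as np

def importance_reweight(weights, logl_old, logl_new):
    """Reweight MCMC samples for a new likelihood.

    weights  : original chain weights (multiplicities)
    logl_old : log-likelihood the chain was generated with
    logl_new : log-likelihood of the new theory (e.g. non-Limber)
    """
    dlogl = logl_new - logl_old
    # Subtract the maximum before exponentiating for numerical stability
    new_weights = weights * np.exp(dlogl - dlogl.max())
    # Effective sample size: if this collapses relative to len(weights),
    # the posterior has shifted too far and IS cannot be trusted
    ess = new_weights.sum() ** 2 / (new_weights ** 2).sum()
    return new_weights, ess
```

A collapsed effective sample size is the telltale that the new effect has shifted the posterior by too much for importance sampling to work.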
from cobaya.
Hi
Thank you.
"The sequential ordering of vectors just determines the proposals, but the evaluation order is still Markovian." -> I am not sure I understood that. In any case, while it is true that IS can be quite useful, when testing shifts in scale cuts you don't know whether a new effect (such as including non-Limber, RSD or baryonic effects) can shift things by more than a sigma or two, and then IS can fail. We see that in DES when we include, for example, RSD and baryonic effects with specific scale cuts. Having a higher numerical cost but a shorter convergence time is a good deal for us. In any case, this was just a proposal based on a sampler I am testing that does that.
Best
Vivian
That’s true, though if IS does not work, that already tells you there is a problem (the result is not numerically stable).
By Markovian I meant that the next step depends on the previous point. So while you can propose N points at once, typically one of the first few will be accepted, at which point you have to throw away the calculations for the remaining points and generate a new set of proposals based at the new point. You can increase the step distance to decrease the acceptance rate so that fewer evaluations are wasted, but overall efficiency will go down. Usually one is running a bunch of chains in parallel anyway, so overall it’s more efficient to run each chain on a modest number of CPUs and the different chains in parallel. Things like non-Limber should parallelize well with OpenMP (e.g. calculating with CAMB) on each MPI process.
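To illustrate the wasted-evaluation argument, here is a toy sketch (not cobaya code) of speculatively batched Metropolis on a 1-D Gaussian, counting what fraction of the precomputed likelihood evaluations are actually usable:

```python
import numpy as np

rng = np.random.default_rng(0)

def logp(x):
    # Toy 1-D standard-normal log-density, standing in for an expensive likelihood
    return -0.5 * x * x

def batched_metropolis(n_steps, batch=8, step=2.4):
    """Metropolis where `batch` proposals are evaluated speculatively.

    Proposals after the first acceptance were conditioned on a point the
    chain has already left, so their evaluations are wasted.
    """
    x, evals, used = 0.0, 0, 0
    for _ in range(n_steps):
        proposals = x + step * rng.standard_normal(batch)
        evals += batch          # all batch evaluations are paid for up front
        for p in proposals:
            used += 1
            if np.log(rng.random()) < logp(p) - logp(x):
                x = p           # accepted: the rest of the batch is discarded
                break
    return used / evals         # fraction of evaluations actually used

print(f"useful fraction: {batched_metropolis(2000):.2f}")
```

With a moderate acceptance rate, most of each batch lands after the first acceptance, so the useful fraction stays well below one; shrinking the acceptance rate reduces the waste but also the mixing, which is the trade-off described above.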
Hi both,
I am afraid I agree with Antony that what you are proposing is unlikely to increase performance significantly, and that you would have trouble making it behave in a Markovian way; and I agree with you both that IS can miss a genuinely new effect.
I remember we discussed at some point internal caching of different parts of your calculation, which could very significantly improve performance now that Cobaya has manual parameter blocking. Did you get to implement that sort of caching where possible?
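As a purely schematic illustration of that kind of caching (not the cobaya interface): keep the last result of the slow block keyed on its own parameters, so that steps which only move fast parameters reuse it:

```python
class BlockedLikelihood:
    """Caches the slow block so fast-parameter moves skip recomputation."""

    def __init__(self):
        self._slow_key = None
        self._slow_val = None
        self.n_slow_evals = 0   # bookkeeping, to see the cache working

    def slow_part(self, slow_params):
        key = tuple(slow_params)
        if key != self._slow_key:
            self.n_slow_evals += 1
            # stand-in for the expensive piece (e.g. a non-Limber projection)
            self._slow_val = sum(p ** 2 for p in slow_params)
            self._slow_key = key
        return self._slow_val

    def loglike(self, slow_params, fast_params):
        # cheap piece recomputed every call; slow piece served from cache
        return -0.5 * (self.slow_part(slow_params) + sum(p ** 2 for p in fast_params))
```

With manual parameter blocking, the sampler varies the fast block many times per slow-block change, so the cached value is hit on most calls.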
Another tip would be computing the initial covariance matrix from the Limber case (even if it's a little different, it will be faster than estimating a full one from the slower non-Limber likelihood). In terms of increasing the acceptance rate, maybe emcee could also be a good approach? We can try to do a quick implementation.
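For example, a weighted covariance can be computed from the samples of the cheap Limber chain and fed to the slow run as its starting proposal covariance (array layout assumed: one row per sample, one column per parameter):

```python
import numpy as np

def proposal_covmat(samples, weights):
    """Weighted parameter covariance from a cheap exploratory chain.

    samples : (n_samples, n_params) array, e.g. from the fast Limber run
    weights : (n_samples,) chain weights (multiplicities)
    """
    mean = np.average(samples, axis=0, weights=weights)
    diff = samples - mean
    return (weights[:, None] * diff).T @ diff / weights.sum()
```

The resulting matrix can then be saved to a text file (with the parameter names in a header row) and pointed to from the slow run's input.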
Or even using PolyChord with a small number of live points and a small `num_repeats`, which would not provide a good approximation of the evidence, but the Monte Carlo sample would be as good as MCMC's, and it would take better advantage of MPI parallelisation for slow likelihoods (since, at least to me, it looks like you should worry more about that at this point than about the dimensionality, but maybe I am wrong).
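For reference, a sketch of such a setup in a cobaya input file (option names as in cobaya's PolyChord interface; the values are purely illustrative and deliberately small):

```yaml
sampler:
  polychord:
    nlive: 50        # far fewer live points than an evidence-quality run
    num_repeats: 5   # few slice-sampling repeats per live point
```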
When is this particular project due? I have something in the pipeline that may be useful for it (using machine learning), but it's a more long-term approach. We can discuss it privately, if you want.
Closing this for book-keeping reasons (there is nothing clearly defined to be done), but feel free to write us privately to discuss how to approach that problem.