moble / scri

Python/numba code for manipulating time-dependent functions of spin-weighted spherical harmonics on future null infinity

License: MIT License

Topics: astronomy, gravitational-waves, python

scri's Introduction


Scri

Python/numba code for manipulating time-dependent functions of spin-weighted spherical harmonics on future null infinity

Citing this code

If you use this code for academic work (I can't actually imagine any other use for it), please cite the latest version that you used in your publication. The DOI is:

Also please cite the papers for/by which it was produced:

Bibtex entries for these articles can be found here. It might also be nice of you to provide a link directly to this source code.

Quick start

Note that installation is not possible on Windows due to missing FFTW support.

Installation is as simple as

conda install -c conda-forge scri

or

python -m pip install scri

If the latter command complains about permissions, you're probably using your operating system's version of python, which can cause serious conflicts with essential OS functions. To avoid these issues, install conda/mamba. This will create a separate copy of python inside your home directory (avoiding issues with permissions) which you can update independently of your OS.

Then, in python, you can check to make sure installation worked with

import scri
w = scri.WaveformModes()

Note that scri can take a few seconds to import the first time as it compiles some code automatically. Here, w is an object to contain time and waveform data, as well as various related pieces of information -- though it is trivial in this case, because we haven't given it any data. For more information, see the docstrings of scri, scri.WaveformModes, etc.

Documentation

Tutorials and automatically generated API documentation are available on Read the Docs: scri.

Acknowledgments

Every change to this code is recompiled automatically, bundled into a conda package, and made available for download from anaconda.org.

The work of creating this code was supported in part by the Sherman Fairchild Foundation and by NSF Grants No. PHY-1306125 and AST-1333129.

scri's People

Contributors

10220, akhairna, duetosymmetry, hrueter, keefemitman, markscheel, moble, raffienficiaud, vijayvarma392


scri's Issues

Errors in com_motion.py

Hi Mike,

I've found 2 coding errors in com_motion.py. One only shows up in diagnostic figures, but one affects the calculation of the average CoM motion.

The primary error is in line 110 where you set t_f. Because of the way array slicing works, given that you take the slices as ...[i_i:i_f], the final time should be set to t[i_f-1]. This is a small error that is easy to miss when the time steps are very small, but makes a big difference when the time steps are large.

The other error comes in at line 140. There, you use (t_j - t_i) when you should just use t_j. Again, if t_i is small, this makes little difference, but it's a big effect when t_i is large. The same error shows up again in line 238.

Cheers,
Greg Cook
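The slice-endpoint error described above is easy to demonstrate with a small numpy sketch (variable names are hypothetical, chosen to mirror the issue, not taken from com_motion.py):

```python
import numpy as np

t = np.linspace(0.0, 10.0, 6)  # coarse time steps make the error visible
i_i, i_f = 1, 4

# The slice t[i_i:i_f] EXCLUDES index i_f, so the last sample actually
# used is t[i_f - 1], and the final time must be set accordingly.
window = t[i_i:i_f]

t_f_wrong = t[i_f]       # one full step beyond the data actually used
t_f_right = t[i_f - 1]   # matches the last element of the slice

assert window[-1] == t_f_right
assert t_f_wrong - t_f_right == t[1] - t[0]  # the error is exactly one time step
```

As the issue notes, the discrepancy is a single time step, so it is invisible with dense sampling and large with coarse sampling.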

waveform_grid and asymptotic_bondi_data transform differently

It seems that when transformed via a supertranslation, waveform_grid and asymptotic_bondi_data objects transform somewhat differently. I haven't dug through the code, but it looks like the relationship between the transformations that map to the same frame is \alpha(waveform_grid) = 4\alpha(asymptotic_bondi_data). I assume this is due to a convention choice? E.g., using an NP definition instead of a GHP definition for \eth operators or something.

Trying to understand how to perform supertranslation conjugation by a Lorentz transformation

Hi @moble (cc'ing @duetosymmetry),

So I'm following up about what I mentioned during our call today regarding composing BMS transformations.

Let's imagine I want to do the following:
$$\begin{align}B_{2}\cdot B_{1}&=\left(L_{2}\cdot S_{2}\right)\cdot\left(L_{1}\cdot S_{1}\right)\\&=L_{2}\cdot\left(L_{1}\cdot S'\cdot L_{1}^{-1}\right)\cdot L_{1}\cdot S_{1}\\&=\left(L_{2}\cdot L_{1}\right)\cdot\left(S'\cdot S_{1}\right)\quad\text{for}\quad S'\equiv L_{1}^{-1}S_{2}L_{1},\end{align}$$ where $B_{1}$ and $B_{2}$ are two BMS transformations.

For now consider just trying to compute $S'$. I would think doing something like this would suffice:

distorted_grid_rotors = boosted_grid(R, B, n_theta, n_phi)
_, _, one_over_k, _ = conformal_factors(B, distorted_grid_rotors)
        
S_prime = spinsfast.map2salm(
    sf.Grid(sf.Modes(S, spin_weight=0).real.evaluate(distorted_grid_rotors),
            spin_weight=0
            ).real * one_over_k[0],
    0,
    ell_max
)

That is, we

  • compute the distorted grid rotors and conformal factor,
  • evaluate $S$ on the distorted grid and apply the conformal factor,
  • extract the SWSH coefficients.

But for some reason this doesn't seem to work for me.

I would also think that we should be able to run the above code for each transformation one by one, i.e., running the code for the rotation and then running it again for the boost with the input $S$ being the output $S'$ from the rotation, but this doesn't seem to agree with supplying both transformations at once. I've attached some code demonstrating this.

Can you please try to help me understand what I'm doing wrong? 😅 Thanks!

Enhancement: Allow reuse/precomputation of interpolating splines in ModesTimeSeries (and abd)

Currently, calling interpolate, derivative, or antiderivative on an instance of ModesTimeSeries will construct a scipy.interpolate.CubicSpline interpolant, and then throw it out when the function returns. Building the interpolant is, AFAIK, most of the time in performing the interpolation/derivative/antiderivative. This means that calculations can't benefit from a speedup by reusing the interpolant.

I see two approaches to reusing the spline interpolant.

One, make it the user's problem; add an optional keyword argument spline to those three routines, and the user is responsible for passing in the correct interpolant (if they pass in garbage, they get back garbage). Along with this change also add a function to compute and return the interpolant.

Two, try to handle saving/reusing the interpolant inside the object. This would mean adding a member variable to store the interpolant, computing it the first time it's requested, and reusing it whenever needed.

The advantage of the second approach is that it will speed up any code that could currently gain from reuse.

The disadvantage of the second approach is that the spline may become invalid if somebody modifies the data of the object, and I don't know if it's possible to (cheaply) determine if the spline has become invalid.
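A minimal sketch of the second approach (class and attribute names are hypothetical, not scri's API): cache the spline on the object and rebuild it only when a cheap fingerprint of the underlying data changes, which addresses the invalidation concern at modest cost.

```python
import numpy as np
from scipy.interpolate import CubicSpline

class CachedSplineSeries:
    """Hypothetical sketch: cache the CubicSpline inside the object and
    rebuild it only when the underlying data appear to have changed."""

    def __init__(self, t, data):
        self.t = np.asarray(t, dtype=float)
        self.data = np.asarray(data, dtype=float)
        self._spline = None
        self._fingerprint = None

    def _current_fingerprint(self):
        # Cheap (but not bulletproof) invalidation check: hash the raw bytes.
        return (hash(self.t.tobytes()), hash(self.data.tobytes()))

    @property
    def spline(self):
        fp = self._current_fingerprint()
        if self._spline is None or fp != self._fingerprint:
            self._spline = CubicSpline(self.t, self.data)
            self._fingerprint = fp
        return self._spline

    def interpolate(self, new_t):
        return self.spline(new_t)

    def derivative(self, new_t):
        return self.spline.derivative()(new_t)
```

Hashing the raw bytes is O(n) per access — far cheaper than refitting the spline, but not free, and it cannot detect every aliasing scenario (e.g. mutation through another view of the same buffer), so it is a heuristic rather than a guarantee.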

Comments? @moble @keefemitman

Two failing tests with pytest --run_slow_tests

There are two failing slow_tests in the file test_waveform_grid.py. Both of these issues are in the function scri.sample_waveforms.single_mode_proportional_to_time_supertranslated():

The first error comes up in test_space_translation with a numba.core.errors.TypingError.

The second error comes up in test_hyper_translation and is a bit more straightforward. Here we find that we need mode data from the higher index Weyl scalars to perform a supertranslation of samples.single_mode_proportional_to_time() for spin weights corresponding to lower index Weyl scalars. This can be easily fixed by:

auxiliary_waveforms = {}
for i in range(s+2):
    auxiliary_waveforms[f"psi{4-i}_modes"] = samples.single_mode_proportional_to_time(s=i-2, ell=ell, m=m)
w_m1 = samples.single_mode_proportional_to_time(s=s, ell=ell, m=m).transform(
    supertranslation=supertranslation,
    **auxiliary_waveforms,
)

However the issue that now comes up is that the analytic supertranslation performed in scri.sample_waveforms.single_mode_proportional_to_time_supertranslated() does not take into account the contribution from higher order Weyl scalars.

Zenodo metadata seems wrong

Looking at the Zenodo version history, the "publication date" is set to March 2, 2022, for many versions that were uploaded since that time. The authors metadata field also does not seem to be updating. @keefemitman added code that is in the latest released version, but does not appear in the authors list there (and thus also the automatically-generated bibtex entry).

Enhancement: Add BMS group multiplication calculations

@keefemitman and I worked out what to do (requires being able to conjugate a supertranslation α(θ,φ) by a Lorentz boost; then everything else will follow easily). The question is whether there should be a new type BMS added. I think it's natural to add such a type. However, current code stores BMS transformations as dicts rather than an object of type BMS, so to make a uniform interface, several APIs may change. Requesting comment from @moble

Missing objects in process_transformation_kwargs

For the BondiDataContainer branch, there are two fixes needed for the process_transformation_kwargs function in the waveform_grid.py file:

  • original_kwargs = kwargs.copy() should go outside this function.
  • Missing items:
    • spacetime_translation
    • ell_max_supertranslation
    • boost_velocity

Complex Waveform that has been sliced in (ell,m) can not be interpolated in time

Because scipy.interpolate functions only act on real arrays, not complex ones, WaveformBase.interpolate contains the following code to go through real views:

scri/waveform_base.py

Lines 807 to 810 in 72aaa1a

if self.data.dtype == np.dtype(complex):
    for i in range(W.n_data_sets):
        W.data_2d[:, i] = (splev(W.t, splrep(self.t, self.data_2d.view(dtype=float)[:, 2*i], s=0), der=0)
                           + 1j*splev(W.t, splrep(self.t, self.data_2d.view(dtype=float)[:, 2*i+1], s=0), der=0))

However, if the Waveform was built by slicing a WaveformModes in (ell,m), then data_2d will not be C-contiguous. Then np.ndarray.view will raise a ValueError, as mentioned here.
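A minimal numpy reproduction of the contiguity failure, with np.ascontiguousarray as one possible workaround (at the cost of a copy); the array here is a stand-in, not scri's actual data_2d:

```python
import numpy as np

# A strided column slice of a complex 2-D array is not C-contiguous along
# its last axis, so reinterpreting it with .view(dtype=float) raises
# ValueError — the failure mode described in the issue.
data = np.arange(12, dtype=complex).reshape(3, 4)
sliced = data[:, ::2]  # non-contiguous slice, shape (3, 2)
assert not sliced.flags["C_CONTIGUOUS"]

raised = False
try:
    sliced.view(dtype=float)
except ValueError:
    raised = True
assert raised

# Copying to a contiguous buffer first sidesteps the problem:
contiguous = np.ascontiguousarray(sliced)
reals = contiguous.view(dtype=float)  # interleaved re/im, shape (3, 4)
assert reals.shape == (3, 4)
```

Note that the copy breaks the link to the original array, so this is only safe where the code does not rely on writing back through the view.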

Tests fail due to pytest deprecating direct fixture call

Directly calling a fixture has been deprecated in pytest. Thus any tests that call *_waveform fixtures will fail. It seems like the fix is either to just remove all arguments from *_waveform or get really clever with the pytest parametrizations.
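The first option (removing the need to call fixtures directly) is usually done by moving the construction into a plain factory function that both the fixture and other code can call; a generic sketch with hypothetical names, not scri's actual fixtures:

```python
import pytest

def make_waveform(begin=0.0, end=100.0, n_times=1000):
    """Plain factory: safe to call from tests, other factories, or fixtures."""
    # Stand-in for the real waveform construction.
    return {"t0": begin, "t1": end, "n": n_times}

@pytest.fixture
def linear_waveform():
    # Thin wrapper: pytest injects this; nothing calls it directly.
    return make_waveform()

def test_uses_fixture(linear_waveform):
    assert linear_waveform["n"] == 1000

def test_uses_factory_directly():
    # An ordinary function call — no fixture deprecation warning.
    w = make_waveform(n_times=10)
    assert w["n"] == 10
```

This avoids the parametrization gymnastics at the cost of one extra function per fixture.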

Strange test failure in Azure Pipeline.

The test test_supertranslation_inverses in test_waveform_grid.py still fails in the Azure pipeline on GitHub, though I cannot reproduce the failure on my own machine, regardless of how I install scri locally (including with a clean install).

Spline argument bbox really doesn't do what you'd think

Despite what any reasonable person would think from the documentation, the bbox argument to InterpolatedUnivariateSpline actually doesn't do what is needed here:

scri/waveform_modes.py

Lines 403 to 407 in 1ed24d9

integrand_spline_r = IUS(times, integrand.real, bbox=[t0, t1])
integrand_spline_i = IUS(times, integrand.imag, bbox=[t0, t1])
return (integrand_spline_r.integral(t0, t1)
+ 1.j * integrand_spline_i.integral(t0, t1) )

It sets the limits of the knots, or something like that. In particular, the bbox limits actually have to be at or outside the limits of times, or you get an error.

The new quaternion.definite_integral function should actually handle this whole chunk correctly with something like

    return definite_integral(integrand, times, t0, t1)

Note that I added logic to the core function that just slices integrand and times to focus on the requested range of times — which I believe was the motivation for using bbox here (and could save time in some cases by not fitting a spline to unused data). But because splines can have weird transients at the beginning and end, I included additional padding of up to 10 unused points on each side where possible, which should be overkill.

@duetosymmetry I'm opening this just so I don't forget about it. That function from quaternion isn't fully deployed yet, but if you get a chance to try out my proposed fix, let me know how it goes.
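The slice-and-pad strategy described above can be sketched as follows — a hypothetical re-implementation for illustration, not the actual quaternion.definite_integral:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def definite_integral_sketch(integrand, times, t0, t1, pad=10):
    """Slice `integrand` and `times` down to [t0, t1], keeping up to `pad`
    extra points on each side to absorb spline edge transients, then fit a
    spline to the slice and integrate it over exactly [t0, t1]."""
    i0 = max(np.searchsorted(times, t0) - pad, 0)
    i1 = min(np.searchsorted(times, t1) + pad, len(times))
    spline = CubicSpline(times[i0:i1], integrand[i0:i1])
    return spline.integrate(t0, t1)
```

CubicSpline accepts complex-valued data, so (unlike the IUS-based code above) no manual splitting into real and imaginary parts is needed.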

Improved documentation?

I'd like to use scri to extrapolate finite radii waveforms, but from the ReadTheDocs it's not clear how to actually do that with scri. It would be super useful if there were a set of tutorials or something to show how to use scri for such tasks.

Should sample_waveforms.py:modes_constructor be generalized so that storage destination can be passed?

Right now, sample_waveforms.py:modes_constructor always allocates storage for a new WaveformModes object. A user may desire to set up storage in advance and have modes_constructor write into that storage, then create the WaveformModes object using that storage, to avoid allocation if it is not needed. However, this would require a change to the signature for data_functor so that it can accept the destination as well.
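The proposed signature change might look like this sketch, following numpy's ufunc `out=` convention (function body and names are hypothetical, for illustration only):

```python
import numpy as np

def data_functor(t, lm, out=None):
    """Write mode data into `out` if supplied, else allocate a fresh array."""
    if out is None:
        out = np.empty((t.size, len(lm)), dtype=complex)
    out[:] = 0.0
    out[:, 0] = np.exp(1j * t)  # toy mode data for illustration
    return out

t = np.linspace(0.0, 1.0, 5)
lm = [(2, 2), (2, 1)]
preallocated = np.empty((t.size, len(lm)), dtype=complex)
result = data_functor(t, lm, out=preallocated)
assert result is preallocated  # no new allocation happened
```

Returning `out` either way keeps the call site identical whether or not the caller preallocates, so existing users of modes_constructor would be unaffected.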

Should ABD objects optimize for when some fields are omitted?

We may be in the situation where we use an abd object to store only the strain, or just the strain and \psi_i through \psi_4 for some smallest value of i. In this case, there are a lot of calculations in e.g. AsymptoticBondiData.transform() that could be skipped. For example, in the lines

# ψ0(u, θ', ϕ') exp(2iλ)
ψ0 = sf.Grid(self.psi0.evaluate(distorted_grid_rotors), spin_weight=2)
# ψ1(u, θ', ϕ') exp(iλ)
ψ1 = sf.Grid(self.psi1.evaluate(distorted_grid_rotors), spin_weight=1)
# ψ2(u, θ', ϕ')
ψ2 = sf.Grid(self.psi2.evaluate(distorted_grid_rotors), spin_weight=0)
# ψ3(u, θ', ϕ') exp(-1iλ)
ψ3 = sf.Grid(self.psi3.evaluate(distorted_grid_rotors), spin_weight=-1)
# ψ4(u, θ', ϕ') exp(-2iλ)
ψ4 = sf.Grid(self.psi4.evaluate(distorted_grid_rotors), spin_weight=-2)
# σ(u, θ', ϕ') exp(2iλ)
σ = sf.Grid(self.sigma.evaluate(distorted_grid_rotors), spin_weight=2)

we can simply skip all the fields which are not stored; and similarly in the lines
fprime_of_timenaught_directionprime = np.empty((6, self.n_times, n_theta, n_phi), dtype=complex)
# ψ0'(u, θ', ϕ')
fprime_temp = ψ4.copy()
fprime_temp *= ðuprime_over_k
fprime_temp += -4 * ψ3
fprime_temp *= ðuprime_over_k
fprime_temp += 6 * ψ2
fprime_temp *= ðuprime_over_k
fprime_temp += -4 * ψ1
fprime_temp *= ðuprime_over_k
fprime_temp += ψ0
fprime_temp *= one_over_k_cubed
fprime_of_timenaught_directionprime[0] = fprime_temp
# ψ1'(u, θ', ϕ')
fprime_temp = -ψ4
fprime_temp *= ðuprime_over_k
fprime_temp += 3 * ψ3
fprime_temp *= ðuprime_over_k
fprime_temp += -3 * ψ2
fprime_temp *= ðuprime_over_k
fprime_temp += ψ1
fprime_temp *= one_over_k_cubed
fprime_of_timenaught_directionprime[1] = fprime_temp
# ψ2'(u, θ', ϕ')
fprime_temp = ψ4.copy()
fprime_temp *= ðuprime_over_k
fprime_temp += -2 * ψ3
fprime_temp *= ðuprime_over_k
fprime_temp += ψ2
fprime_temp *= one_over_k_cubed
fprime_of_timenaught_directionprime[2] = fprime_temp
# ψ3'(u, θ', ϕ')
fprime_temp = -ψ4
fprime_temp *= ðuprime_over_k
fprime_temp += ψ3
fprime_temp *= one_over_k_cubed
fprime_of_timenaught_directionprime[3] = fprime_temp
# ψ4'(u, θ', ϕ')
fprime_temp = ψ4.copy()
fprime_temp *= one_over_k_cubed
fprime_of_timenaught_directionprime[4] = fprime_temp
# σ'(u, θ', ϕ')
fprime_temp = σ.copy()
fprime_temp -= ððα
fprime_temp *= one_over_k
fprime_of_timenaught_directionprime[5] = fprime_temp

we can skip the calculations for fields \psi_j where j<i, i representing the smallest subscript of Weyl scalar we keep, as above.

If I remember correctly, @akhadse has implemented this as a speedup. Was that right, Akshay?

@moble do you think this is a feature worth having for ABD objects?
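A sketch of the skip-omitted-fields idea, using plain dict stand-ins rather than real scri objects: keep only the stored fields in a mapping and loop over it, so omitted Weyl scalars are never evaluated or transformed at all.

```python
# Stand-in field data; in the real code these would be ModesTimeSeries objects.
stored = {"sigma": "sigma-data", "psi4": "psi4-data", "psi3": "psi3-data"}
spin_weights = {"psi0": 2, "psi1": 1, "psi2": 0, "psi3": -1, "psi4": -2, "sigma": 2}

evaluated = {}
for name, data in stored.items():
    # In the real code this would be
    #     sf.Grid(field.evaluate(distorted_grid_rotors), spin_weight=...)
    evaluated[name] = (data, spin_weights[name])

# psi0..psi2 were omitted, so no work was done for them:
assert "psi0" not in evaluated and len(evaluated) == 3
```

The same mapping could then drive the fprime accumulation loop, computing ψj' only for stored j, since each ψj' depends only on ψj through ψ4.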

remove_avg_com_motion requires path_to_waveform_h5 option

remove_avg_com_motion makes you provide path_to_waveform_h5 even if you tell it not to read that file (by passing in a waveform object w_m) and even if that file doesn't exist. The actual h5 file doesn't need to exist, but path_to_waveform_h5 must have the form 'something.h5/group' or '/full/path/something.h5/group'. The function always parses it, and throws an error if it cannot, because it uses the /full/path and group pieces later in the code. If you don't provide the path_to_horizons_h5 or path_to_matter_h5 arguments, it will look for Horizons.h5 and Matter.h5 inside '/full/path' (or inside cwd if the given path begins with 'something.h5'). Also, both '/full/path' and 'group' are used in the output paths and filenames of figure files if plotting is turned on.

Note that if your Horizons.h5 happens to be in cwd, then the default value of path_to_waveform_h5 will work correctly if you don't specify path_to_horizons_h5 (though if you turn plotting on, the plot filename will contain 'Extrapolated_N2', which might be misleading).

It would be less confusing if the options were done differently.

Perhaps path_to_horizons_h5 could be the required option, and then internally the variable directory would come from path_to_horizons_h5. Then path_to_waveform_h5 could be truly optional and have a default of None. The internal variable subdir could have its own option (which the user would be required to specify if path_to_waveform_h5 is None and if plotting is turned on).
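The parsing behavior described above could at least be isolated into a small helper; a sketch with hypothetical names, splitting 'something.h5/group' paths into their directory, filename, and group pieces:

```python
import os

def split_h5_path(path_to_waveform_h5):
    """Split '/full/path/something.h5/group' (or 'something.h5/group') into
    (directory, filename, group); directory falls back to cwd."""
    head, _, group = path_to_waveform_h5.partition(".h5")
    directory, filename = os.path.split(head + ".h5")
    return directory or os.getcwd(), filename, group.lstrip("/")
```

With the parsing factored out, making path_to_waveform_h5 optional (defaulting to None) becomes a localized change rather than one threaded through the whole function.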
