compwa / compwa.github.io
Source code for the ComPWA Organization website
Home Page: https://compwa.github.io
License: GNU General Public License v3.0
I did not find a way to concatenate 4-momentum objects from the vector package directly.
Here is the temporary solution:
1. Convert to a tuple of NumPy arrays
2. Concatenate via NumPy
3. Convert back to a tuple of vector objects
Just for the record. Maybe we will find a better solution soon(?)
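The workaround can be sketched in pure NumPy as follows. The component names and the helper function are illustrative, not the vector package API; in practice the resulting component arrays would be passed back to vector (e.g. vector.array) to rebuild 4-momentum objects.

```python
import numpy as np

# Sketch of the workaround (names hypothetical): represent each 4-momentum
# array by its components, concatenate per component with NumPy, then
# rebuild vector objects from the result.
def concat_components(p1, p2):
    """Concatenate two component-wise 4-momentum representations."""
    return {k: np.concatenate([p1[k], p2[k]]) for k in ("px", "py", "pz", "E")}

a = {"px": np.array([1.0]), "py": np.array([0.0]), "pz": np.array([0.0]), "E": np.array([2.0])}
b = {"px": np.array([0.5]), "py": np.array([0.5]), "pz": np.array([0.0]), "E": np.array([1.5])}
c = concat_components(a, b)  # each component now has length 2
```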
Currently, the rule E402 (module-import-not-at-top-of-file) is deactivated for all notebooks.
compwa.github.io/pyproject.toml
Line 256 in bde4df9
We should activate this rule for notebooks and switch it off only for the relevant notebooks through per-file-ignores.
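A sketch of what this could look like in pyproject.toml (Ruff configuration; the notebook path is a placeholder):

```toml
[tool.ruff.lint.per-file-ignores]
# Hypothetical example: only this notebook may have imports below the top
"docs/report/005.ipynb" = ["E402"]
```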
Currently, all notebooks are run on every commit in order to generate output cells for the web pages. This has a few major disadvantages.
Instead, we could/should:
(!pip install ...)
It takes more than ten minutes to run the Julia kernel notebook (TR-019). This is not really a problem locally or on GitHub Actions, because those can reuse the caches of jupyter-cache. RTD cannot reuse its cache, so it's better to freeze that notebook.
See also #207 (comment) about #201, though...
The pin_requirements.py script is currently 'updated' in other repositories through pre-commit, which just means it is copied into the .constraints folder. It would be cleaner to publish this script as a GitHub Action, because it is only run on GitHub Actions.
QRules, AmpForm, TensorWaves, and PWA Pages have pinned requirements, which makes it easier to spot problems introduced by requirement upgrades. This would for instance be helpful in #49.
Pinned requirements also speed up Conda+pip, which improves the local DX and will speed up Binder as well. Compare for instance:
https://github.com/ComPWA/compwa-org/actions/runs/1195243261 (old: 3m14)
https://github.com/ComPWA/compwa-org/actions/runs/1254008065 (new: 2m35)
Allow running a selection of notebooks in the "Run all notebooks" job. See here for how to do this with workflow_dispatch arguments.
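For reference, a minimal sketch of such a workflow_dispatch input (the input name and its semantics are assumptions, not the actual workflow):

```yaml
on:
  workflow_dispatch:
    inputs:
      notebooks:
        description: "Space-separated notebook paths to run (empty runs all)"
        required: false
        default: ""
```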
It would be useful to have an overview of which PRs affected a specific TR under compwa.github.io/reports.
This would require adapting the sphinx-git implementation so that the env.docname is inserted. Substituting env.docname directly does not work, because it seems not possible to use recursive substitutions.

See https://mybinder.org/v2/gh/ComPWA/compwa-org/b8340de?urlpath=lab/tree/docs/report/000.ipynb (for the latest commit b8340de)
It currently takes a very long time to build a Binder image when launching a notebook. This load time might be reduced by installing only the doc requirements instead of all dev requirements.
ComPWA/tensorwaves#359 switched from pytest-notebook to nbmake. This has several advantages:
- (tox -e nb)
- attrs can be upgraded to >21.1 (pytest-notebook sets the limit to <21), which works much better with pyright and VSCode

ComPWA (C++) and TensorWaves do their unbinned negative log-likelihood fit with pure functions, as opposed to working with Probability Density Functions (PDFs). The normalization in each fit step therefore takes place in the estimator, not in the function. Fit fractions are then computed by integrating at the end, not by reading off coefficients.
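The pure-function approach can be illustrated with a minimal sketch. The toy intensity and the Monte Carlo normalization below are assumptions for illustration, not the actual TensorWaves API:

```python
import numpy as np

# Toy sketch: unbinned NLL over a pure intensity function, with the
# normalization computed in the estimator via a phase-space MC sample.
def unbinned_nll(intensity, data, phsp):
    normalization = np.mean(intensity(phsp))  # MC estimate of the integral
    return -np.sum(np.log(intensity(data) / normalization))

rng = np.random.default_rng(0)
data = rng.uniform(-1, 1, 1_000)   # hypothetical data sample
phsp = rng.uniform(-1, 1, 10_000)  # hypothetical phase-space sample
nll = unbinned_nll(lambda x: 1 + x**2, data, phsp)
```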
Some questions to answer:
Write the Amplitude analysis example in the form of a TR and do the fit and data generation with zfit (see ComPWA/tensorwaves#258). This might serve as feedback to zfit and possibly evolve into a benchmark comparison.
Note that such a notebook may also evolve into ComPWA/.github#6.
The 3D interactive plots of the Riemann sheets made using plotly
are not being displayed on RTD, even though they are shown in Jupyter Lab.
The metadata of Zenodo DOIs currently needs to be edited by hand after a release, because (for instance) Zenodo does not extract the package description from README.md.
This can be addressed with a .zenodo.json file. For an example, see here (requires ComPWA membership), and then the "Metadata" tab. The "version" key can be left out: Zenodo already guesses this correctly from Git, so it does not make sense to also embed the version in .zenodo.json.
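For illustration, a minimal .zenodo.json could look like this (all field values are placeholders; the keys are standard Zenodo metadata fields):

```json
{
  "title": "ComPWA/compwa.github.io: ComPWA Organization website",
  "description": "Source code for the ComPWA Organization website",
  "upload_type": "software",
  "creators": [
    {"name": "Example, Author", "affiliation": "Example Institute"}
  ]
}
```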
Update the check-dev-files pre-commit hook so that the .zenodo.json content stays up to date with the rest of the package.

Collection of suggestions for the Help developing pages.
TR-013 by #111 differs from 10.1155/2020/6674595 in two ways:
formalism="canonical-helicity". This makes it possible to generate all LS-combinations and generate the coefficients from Table 1.

From #139 (comment):
TR-017 works out a parametrization in polar coordinates. It's easy to visualize this with Julia (#139 (comment)), but it would be nice if this parametrization is also visualized in the same (Python) TR.
Publish QRules, AmpForm, and TensorWaves on Zenodo. See:
https://guides.github.com/activities/citable-code
Follow-up to ComPWA/expertsystem#68
Some ideas:
(setup.py) in sync
For more ideas, see the comments under ComPWA/expertsystem#68.
Currently, kinematic variables like helicity angles are computed in AmpForm with NumPy only. This is clumsy, because AmpForm should ideally only produce mathematical expressions. The only reason why this NumPy implementation is in AmpForm is that TensorWaves (which implements the back-ends) is kept as 'ignorant' about physics as possible.
The difficulty with expressing kinematic variable computation in SymPy lies in the fact that we require Einstein summation; see e.g. the implementation here:
https://github.com/ComPWA/ampform/blob/f38179483ac20664584014d9eb659866966681c2/src/ampform/data.py#L209-L213
This is difficult to do with SymPy in such a way that the lambdified expression has the same form (at least when using NumPy as a back-end). Some implementation attempts were made in (local) branches of the AmpForm repo. These should be documented in a TR. They can also be extended with a similar solution to the one presented in ComPWA/ampform#115 (which allows defining custom lambdification methods for certain nodes in the expression tree).
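To make the Einstein-summation requirement concrete, here is a minimal sketch (not the actual AmpForm code) of the kind of contraction that is easy in NumPy but hard to write as a SymPy expression that lambdifies to the same vectorized form:

```python
import numpy as np

# Minkowski inner product over an array of 4-momenta, metric diag(1, -1, -1, -1).
METRIC = np.diag([1.0, -1.0, -1.0, -1.0])

def minkowski_dot(p, q):
    # p, q: arrays of shape (n_events, 4) with components (E, px, py, pz)
    return np.einsum("ij,...i,...j->...", METRIC, p, q)

p = np.array([[2.0, 1.0, 0.0, 0.0]])
m2 = minkowski_dot(p, p)  # invariant mass squared: 2**2 - 1**2 = 3
```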
As the ComPWA repositories are developed for an academic environment, they are expected to evolve continuously. We therefore need to focus more on the developer than the user (one wonders whether there should be a distinction).
In an interactive set-up like that, a more extensive contribute page would be a valuable resource, not only to standardise contribution conventions (its current purpose), but also to keep an inventory of which developer tools are out there and how to use them (didactic purpose).
Some examples:
Of course, we don't need to write tutorials that are already out there: bundling the essentials and referring to helpful resources is already an improvement.
See also:
To-do list:
pylint, flake8, black, mypy, and how our coding conventions are automatically enforced through their configs (ComPWA/tensorwaves#48 (comment))
nbextension
Install the sphinx-comments extension to allow commenting with Hypothes.is and utteranc.es.
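Enabling it would roughly look like this in conf.py (the utterances repository name is an assumption):

```python
# conf.py sketch for sphinx-comments
extensions = [
    "sphinx_comments",
]
comments_config = {
    "hypothesis": True,  # Hypothes.is annotation overlay
    "utterances": {"repo": "ComPWA/compwa.github.io"},  # utterances widget
}
```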
The searchable data table on the Technical Reports overview does not work anymore. The script is still there, but it does not convert the Markdown table created by _list_technical_reports.py
into a data table.
Some hooks like pin-nb-requirements
are only used by ComPWA/compwa.github.io.
The line
%pip install -q ampform==0.14.0 qrules==0.9.7 sympy==1.10.1 tensorwaves[jax,pwa]==0.4.5
causes the error
Note: you may need to restart the kernel to use updated packages.
ERROR: Could not find a version that satisfies the requirement jaxlib; extra == "jax" (from tensorwaves[jax,pwa]) (from versions: none)
ERROR: No matching distribution found for jaxlib; extra == "jax"
Once I remove jax from the tensorwaves options, it works.
I am on commit e73542d of the polarization-sensitivity branch, running the notebook 017.ipynb.
Bug resulted on the following system:
conda env create from the project folder

Developer experience:
expertsystem and tensorwaves. The first incorporates the physics and builds an amplitude model. TensorWaves can use amplitude models and perform fits, but does not include any physics logic.

User experience:
Note: ComPWA rather sacrifices functionality in favour of design and developer-experience improvements. The flexible branching model aims to make up for the loss in functionality.
It would be nice to be able to experiment with Julia in the Technical Reports, so that the project is not necessarily tied to Python.
Some requirements:
🎉 See result in TR-019.
See existing ZenHub description. Should be replaced with the new GitHub Issues.
The ci-tests.yml and notebooks.yml workflows are currently not stable:
https://github.com/ComPWA/compwa-org/actions/runs/1881099598
The reason is that each notebook installs its own pinned dependencies, which somehow results in a ContextualVersionConflict
. Ideally, these jobs should run over each notebook, install the dependencies, and run them without problems.
TR-016 investigates how to implement lambdification for a symbolic integral over vectorized, complex-valued input. It did not yet investigate the use of scipy.integrate.quad_vec(), which seems to work well with lambdas that take array instances from outside and can therefore be embedded in generated lambdify code.
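A quick sketch of how scipy.integrate.quad_vec() handles array-valued parameters that the integrand closes over (the integrand here is just a toy example):

```python
import numpy as np
from scipy.integrate import quad_vec

# quad_vec integrates a scalar-variable integrand whose *output* is an array,
# so the lambda can close over an array of parameters defined outside.
omega = np.array([1.0, 2.0, 3.0])  # hypothetical parameter array
result, error = quad_vec(lambda x: np.cos(omega * x), 0.0, np.pi / 2)
# result[k] approximates sin(omega[k] * pi / 2) / omega[k]
```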
Also note that the version of quadpy that is currently pinned in TR-016 is not available anymore (something with licences). It may therefore be worth rewriting the report entirely so that it only uses scipy.
Currently, TRs have a pip install statement that lists the packages required for the notebook itself. The packages are pinned to a specific version in an attempt to make the notebook reproducible, but indirect dependencies are not pinned. This results in bugs like #272.
Unfortunately, Jupyter notebooks do not support launching sub-kernels. That would be the ideal solution: you have a main developer environment for the repository that you can for instance use to start Jupyter Lab, and each notebook then launches a sub-kernel with its own specific set of dependencies.
The closest we can get to an isolated kernel is by doing a pip install with --prefix and then using sys.path.insert to import from that directory. You can create a constraint file for all indirect dependencies with pip compile (or, faster, uv pip compile). Providing a constraint file for each notebook then makes each technical report reproducible.
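The --prefix plus sys.path.insert trick could be wrapped like this (the prefix path, pinned package, and constraint file in the comment are hypothetical):

```python
import sys

# After running e.g.
#   python -m pip install --prefix <prefix> --constraint 017.constraints.txt ampform==0.14.0
# (package pin and constraint file are hypothetical), make the current kernel
# import from that prefix first:
def prepend_prefix(prefix, version_info=sys.version_info):
    site_packages = f"{prefix}/lib/python{version_info.major}.{version_info.minor}/site-packages"
    if site_packages not in sys.path:
        sys.path.insert(0, site_packages)
    return site_packages
```

Calling prepend_prefix("/tmp/tr-017-env") in the first notebook cell would then shadow the main environment's packages with the notebook-specific ones.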
Disadvantages:
It is unclear who should pip compile the constraint files. Is that done through the notebook as well? How can we provide a mechanism where the writer of the report specifies which direct dependencies are required and then compiles a constraint file from that?

Original post
The problem seems to be caused by sphinx-needs, which we use to structure and interlink the technical reports (e.g. the table at compwa-org.rtfd.io/reports.html). Once the sphinx_needs extension has been disabled (564e827), Plotly figures are visible again (removing any need or spec directives from the notebook also does not fix the problem).
An alternative for organizing report notebooks may be sphinx-gallery, but this does not seem to work well with MyST-NB. Ideally, we want something like learn.astropy.org, where you can search for notebooks with filters etc.
Some alternatives:
- Write some script that creates a DataTable from the TR notebooks, just like EBP's feature vote.
- ablog.rtfd.io: no longer actively maintained.
- learn.astropy.org: looks best, but we would have to use Gatsby, a whole different ecosystem than Sphinx.
Originally posted by @redeboer in #206 (comment)
TR-000 run in Colab is not successful due to the following issues.
ComPWA packages listed on PyPI (https://pypi.org/search/?q=compwa) do not have a description (only a long_description).
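With PEP 621 metadata in pyproject.toml, both fields can be provided; for instance (values are placeholders):

```toml
[project]
name = "compwa-example"  # placeholder
description = "Short summary shown in PyPI search results"
readme = "README.md"     # rendered as the long description on the project page
```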
The pages for this repository are currently hosted on RTD. Its URL is a bit ugly though (compwa-org.rtfd.io), especially since the original ComPWA C++ project has been archived and ComPWA is to be considered an organization anyway.
Since the pages for compwa-org on RTD are not versioned, we could host them on GitHub Pages, particularly on the namespace for the ComPWA GitHub organization (compwa.github.io). That URL currently hosts pycompwa
's documentation, but that could be moved to compwa.github.io/pycompwa. The only thing that's lost is the ability to have PR previews, but that could be kept by just letting compwa-org.rtfd.io redirect to compwa.github.io (see RTD redirects).
This repository has some peculiarities that are not explained by the general develop page. Some things that need explaining (potentially on the general page as well):
- What does environment.yml do, and where does it fetch (optional) pip dependencies and constraints?
- Why are pip install package==0.x.x statements needed?

Should be done before #156.
Some left-overs from the to-do list of #278.