esa / torchquad

Numerical integration in arbitrary dimensions on the GPU using PyTorch / TF / JAX

Home Page: https://www.esa.int/gsp/ACT/open_source/torchquad/

License: GNU General Public License v3.0

Languages: Python 95.74%, TeX 4.26%
Topics: python, gpu, pytorch, torchquad, numerical-integration, multidimensional-integration, integration, automatic-differentiation, high-performance-computing, monte-carlo-integration, vegas, vegas-enhanced, machine-learning

torchquad's Introduction

torchquad


High-performance numerical integration on the GPU with PyTorch, JAX and Tensorflow
Explore the docs »

Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Goals
  3. Getting Started
  4. Usage
  5. Roadmap
  6. Contributing
  7. License
  8. FAQ
  9. Contact

About The Project

The torchquad module allows utilizing GPUs for efficient numerical integration with PyTorch and other numerical Python 3 modules. The software is free to use and is designed for the machine learning community and research groups focusing on topics that require high-dimensional integration.

Built With

This project is built with the following packages:

  • autoray, which means the implemented quadrature supports NumPy and can be used for machine learning with modules such as PyTorch, JAX and Tensorflow, where it is fully differentiable
  • conda, which will take care of all requirements for you

If torchquad proves useful to you, please consider citing the accompanying paper.

Goals

  • Supporting science: Multidimensional numerical integration is needed in many fields, such as physics (from particle physics to astrophysics), in applied finance, in medical statistics, and others. torchquad aims to assist research groups in such fields, as well as the general machine learning community.
  • Withstanding the curse of dimensionality: The curse of dimensionality makes deterministic methods in particular, but also stochastic ones, computationally expensive when the dimensionality increases. However, many integration methods are embarrassingly parallel, which means they can strongly benefit from GPU parallelization. The curse of dimensionality still applies but the improved scaling alleviates the computational impact.
  • Delivering a convenient and functional tool: torchquad is built with autoray, which means it is fully differentiable if the user chooses, for example, PyTorch as the numerical backend. Furthermore, the library of available and upcoming methods in torchquad offers high-efficiency integration for any need.

Getting Started

This is a brief guide on how to set up torchquad.

Prerequisites

We recommend using conda, especially if you want to utilize the GPU. With PyTorch, for example, conda will automatically set up CUDA and the cudatoolkit for you. Note that torchquad also works on the CPU; however, it is optimized for GPU usage. torchquad's GPU support is tested only on NVIDIA cards with CUDA. We are investigating future support for AMD cards through ROCm.

For a detailed list of required packages and packages for numerical backends, please refer to the conda environment files environment.yml and environment_all_backends.yml. torchquad has been tested with JAX 0.2.25, NumPy 1.19.5, PyTorch 1.10.0 and Tensorflow 2.7.0 on Linux; other versions of the backends should work as well but some may require additional setup on other platforms such as Windows.

Installation

The easiest way to install torchquad is simply to

conda install torchquad -c conda-forge

Alternatively, it is also possible to use

pip install torchquad

The PyTorch backend with CUDA support can be installed with

conda install "cudatoolkit>=11.1" "pytorch>=1.9=*cuda*" -c conda-forge -c pytorch

Note that since PyTorch is not yet on conda-forge for Windows, we have explicitly included it here using -c pytorch. Note also that installing PyTorch with pip may not set it up with CUDA support. Therefore, we recommend using conda.

Here are installation instructions for other numerical backends:

conda install "tensorflow>=2.6.0=cuda*" -c conda-forge
pip install "jax[cuda]>=0.2.22" --find-links https://storage.googleapis.com/jax-releases/jax_cuda_releases.html # linux only
conda install "numpy>=1.19.5" -c conda-forge

More installation instructions for numerical backends can be found in environment_all_backends.yml and in the backends' documentation, for example https://pytorch.org/get-started/locally/, https://github.com/google/jax/#installation and https://www.tensorflow.org/install/gpu; often there are multiple ways to install them.

Test

After installing torchquad and PyTorch through conda or pip, users can test torchquad's correct installation with:

import torchquad
torchquad._deployment_test()

After cloning the repository, developers can check the functionality of torchquad by running the following command in the torchquad/tests directory:

pytest

Usage

This is a brief example of how torchquad can be used to compute a simple integral with PyTorch. For a more thorough introduction, please refer to the tutorial section in the documentation.

The full documentation can be found on readthedocs.

# To avoid copying things to GPU memory,
# ideally allocate everything in torch on the GPU
# and avoid non-torch function calls
import torch
from torchquad import MonteCarlo, set_up_backend

# Enable GPU support if available and set the floating point precision
set_up_backend("torch", data_type="float32")

# The function we want to integrate, in this example
# f(x0,x1) = sin(x0) + e^x1 for x0=[0,1] and x1=[-1,1]
# Note that the function needs to support multiple evaluations at once (first
# dimension of x here)
# Expected result here is ~3.2698
def some_function(x):
    return torch.sin(x[:, 0]) + torch.exp(x[:, 1])

# Declare an integrator;
# here we use the simple, stochastic Monte Carlo integration method
mc = MonteCarlo()

# Compute the function integral by sampling 10000 points over the domain
integral_value = mc.integrate(
    some_function,
    dim=2,
    N=10000,
    integration_domain=[[0, 1], [-1, 1]],
    backend="torch",
)

To change the logger verbosity, set the TORCHQUAD_LOG_LEVEL environment variable; for example export TORCHQUAD_LOG_LEVEL=DEBUG.

You can find all available integrators here.

Roadmap

See the open issues for a list of proposed features (and known issues).

Performance

Using GPUs, torchquad scales particularly well with integration methods that offer easy parallelization. For example, below you see error and runtime results for integrating the function f(x,y,z) = sin(x * (y+1)²) * (z+1) on a consumer-grade desktop PC.

Runtime results of the integration. Note the far superior scaling on the GPU (solid line) in comparison to the CPU (dashed and dotted) for both methods.

Convergence results of the integration. Note that Simpson quickly reaches floating point precision. Monte Carlo is not competitive here given the low dimensionality of the problem.

Contributing

The project is open to community contributions. Feel free to open an issue or write us an email if you would like to discuss a problem or idea first.

If you want to contribute, please

  1. Fork the project on GitHub.
  2. Get the most up-to-date code by following this quick guide for installing torchquad from source:
    1. Get miniconda or similar
    2. Clone the repo
    git clone https://github.com/esa/torchquad.git
    3. With the default configuration, all numerical backends with CUDA support are installed. If you do not want this, comment out unwanted packages in environment_all_backends.yml.
    4. Set up the environment. This creates a conda environment called torchquad and installs the required dependencies.
    conda env create -f environment_all_backends.yml
    conda activate torchquad

Once the installation is done, you are ready to contribute. Please note that PRs should be created from and into the develop branch. For each release the develop branch is merged into main.

  3. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  4. Commit your Changes (git commit -m 'Add some AmazingFeature')
  5. Push to the Branch (git push origin feature/AmazingFeature)
  6. Open a Pull Request on the develop branch, not main (NB: We autoformat every PR with black. Our GitHub actions may create additional commits on your PR for that reason.)

and we will have a look at your contribution as soon as we can.

Furthermore, please make sure that your PR passes all automated tests. Review will only happen after that. Only PRs created on the develop branch with all tests passing will be considered. The only exception to this rule is if you want to update the documentation in relation to the current release on conda / pip. In that case you may ask to merge directly into main.

License

Distributed under the GPL-3.0 License. See LICENSE for more information.

FAQ

  1. Q: Error enabling CUDA. cuda.is_available() returned False. CPU will be used.
    A: This error indicates that PyTorch could not find a CUDA-compatible GPU. Either you have no compatible GPU or the necessary CUDA requirements are missing. Using conda, you can install them with conda install cudatoolkit. For more detailed installation instructions, please refer to the PyTorch documentation.
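
To verify the CUDA setup independently of torchquad, a quick sanity check along these lines can help (standard PyTorch API):

import torch
print(torch.cuda.is_available())  # True if PyTorch can see a CUDA-capable GPU
print(torch.version.cuda)         # CUDA version of the PyTorch build, or None for CPU-only builds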

Contact

Created by ESA's Advanced Concepts Team

  • Pablo Gómez - pablo.gomez at esa.int
  • Gabriele Meoni - gabriele.meoni at esa.int
  • Håvard Hem Toftevaag

Project Link: https://github.com/esa/torchquad

torchquad's People

Contributors

abnsy, astro-lee, danielkelshaw, elastufka, fhof, gabrielemeoni, gomezzz, htoftevaag, ilan-gold, martinbubel


torchquad's Issues

Add better test functions

Feature

Desired Behavior / Functionality

Current test functions are based only on polynomials and sinusoidal and exponential functions.

What Needs to Be Done

  • Find nice test functions (e.g. in the VEGAS Enhanced paper)
  • Add them to integration_test_functions.py
  • Add them in integration_test_utils.py so they become part of the standard test suite
  • If necessary adjust current tests in case they fail

How Can It Be Tested

  • Run pytest

torch dataloader crashed after `set_up_backend`

Issue

Problem Description

I'm using this torch integration for deep learning model training.
If I add the line set_up_backend("torch") or set_up_backend("torch", data_type="float32"), I get an error in the torch dataloader. I have no idea why this line breaks the dataloader. Do you have any idea about this error?

Error message

  File "/home/wlsgur4011/torchlevy/test/data_loader_conflict.py", line 21, in test_data_loader_confliict
    for x in train_loader:
  File "/home/wlsgur4011/.anaconda3/envs/torchlevy/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
    data = self._next_data()
  File "/home/wlsgur4011/.anaconda3/envs/torchlevy/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 720, in _next_data
    index = self._next_index()  # may raise StopIteration
  File "/home/wlsgur4011/.anaconda3/envs/torchlevy/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 671, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
  File "/home/wlsgur4011/.anaconda3/envs/torchlevy/lib/python3.8/site-packages/torch/utils/data/sampler.py", line 247, in __iter__
    for idx in self.sampler:
  File "/home/wlsgur4011/.anaconda3/envs/torchlevy/lib/python3.8/site-packages/torch/utils/data/sampler.py", line 133, in __iter__
    yield from torch.randperm(n, generator=generator).tolist()[:self.num_samples % n]
RuntimeError: Expected a 'cuda' device type for generator but found 'cpu'

Or can you give some explanation of set_up_backend("torch")? I cannot work out how it behaves from reading the code. I found that the line logger.info("Active CUDA Device: GPU" + str(torch.cuda.current_device()))
influences the GPU setting: if I remove this line, it works on CPU but not on GPU. It looks like pure logging code, so I cannot understand how it influences the GPU setting.

Thank you for reading this poor bug report.

Expected Behavior

What Needs to be Done

How Can It Be Tested or Reproduced

Google Colab TPU support

Feature

Desired Behavior / Functionality

Setting the default device of torch to TPU should work, but it hangs.

How Can It Be Tested

I have a not-quite-minimal example which can be tested in Google Colab. If you run it, it shows that the TPU is set up correctly, but the integration hangs. If you interrupt it, you get:

tensor([1., 1., 3., 1., 1.], device='xla:1')
xla:1
(0, 1, 2) 0
(0, 2, 1) 1
(1, 0, 2) 1
(1, 2, 0) 0
(2, 0, 1) 0
(2, 1, 0) 1
[[0 1 2 3 4 5]
 [0 2 1 3 4 5]
 [1 0 2 3 4 5]
 [1 2 0 3 4 5]
 [2 0 1 3 4 5]
 [2 1 0 3 4 5]]
(6, 6)

---------------------------------------------------------------------------

KeyboardInterrupt                         Traceback (most recent call last)

<ipython-input-4-2b4f7889adcb> in <cell line: 142>()
    140 
    141 
--> 142 plotwf(ppp)
    143 
    144 

4 frames

/usr/local/lib/python3.10/dist-packages/torch/utils/_device.py in __torch_function__(self, func, types, args, kwargs)
     60         if func in _device_constructors() and kwargs.get('device') is None:
     61             kwargs['device'] = self.device
---> 62         return func(*args, **kwargs)
     63 
     64 # NB: This is directly called from C++ in torch/csrc/Device.cpp

KeyboardInterrupt:

In Google Colab there are two cells; the first installs TPU support for torch and the needed libs:

!pip install cloud-tpu-client==0.10 torch==2.0.0 torchvision==0.15.1 https://storage.googleapis.com/tpu-pytorch/wheels/colab/torch_xla-2.0-cp310-cp310-linux_x86_64.whl
!pip install noisyopt
!pip install torchquad

The second runs the program

# Python program to compute Hessian in PyTorch
# importing libraries
import matplotlib.pyplot as plt
import numpy as np
import torch
from torch.func import hessian
from torchquad import VEGAS, set_up_backend, set_precision
from torch import vmap
import time
from noisyopt import minimizeCompass
from itertools import permutations
from sympy.combinatorics.permutations import Permutation

N_Int_Points = 3000

# set_precision(data_type='float64', backend='torch')
import torch_xla.core.xla_model as xm
dev = xm.xla_device()
torch.set_default_device(dev)
set_up_backend("torch", data_type="float64", torch_enable_cuda=False)
# set_log_level("TRACE")


j = None
# j = torch.complex(torch.tensor(0, dtype=torch.float64), torch.tensor(1, dtype=torch.float64))

ppp = torch.tensor(
    [1.0, 1.0, 3.0, 1.0, 1.0]
    )
print(ppp)
print(ppp.device)

dist_nuclei = torch.tensor(ppp[0], dtype=torch.float64)
nNuclei = 3
nElectrons = 3
IntElectron = [[-dist_nuclei * (nNuclei//2) * 1.3, dist_nuclei * (nNuclei//2) * 1.3]]

m_Nuclei = 1836
m_Electron = 1
IntNuclei = [[-1.5, 1.5]]

nParticles = nElectrons + nNuclei
m = torch.tensor([m_Nuclei]*nNuclei + [m_Electron]*nElectrons)

offsets = torch.zeros(nParticles)  # at 0 there must be high spatial probability density for VEGAS integration to work
for i in range(nNuclei):
    offsets[i+nElectrons] = dist_nuclei * (i - nNuclei//2)

def CutRange(x, r):
    return torch.sigmoid(x+r)*(1-torch.sigmoid(x-r))


q = torch.tensor([-1]*nElectrons + [1]*nNuclei)


def V(dx):
    return torch.exp(-dx**2)


def Vpot(xinp):
    """Potential energy"""
    x = xinp + offsets
    x1 = x.reshape(-1, 1)
    x2 = x.reshape(1, - 1)
    dx = x1 - x2
    Vdx = q.reshape(-1, 1) * V(dx) * q.reshape(1, -1)
    Vdx = Vdx.triu(diagonal=1)
    return Vdx.sum()


def Epot(wf, x):
    return (torch.conj(wf(x)) * Vpot(x) * wf(x)).real


def H_single(wf, x):
    if j is None:
        gg = torch.func.grad(lambda x: wf(x).real)(x)
    else:
        gg = torch.complex(torch.func.grad(lambda x: wf(x).real)(x), torch.func.grad(lambda x: wf(x).imag)(x))
    v = 1/(2*m)  # from partial integration the minus sign already present
    gg = torch.sqrt(v) * gg
    return ((torch.dot(torch.conj(gg), gg) + Epot(wf, x)).real)


def H(wf, x):
    gg = vmap(lambda x: H_single(wf, x))(x)
    return gg


def testwf(ppp, x):
    # x = xx[tuple(perms[0]), ]  # allows summation over permutations later
    return torch.exp(-ppp[1]*x[nElectrons:]**2).prod(-1) * CutRange(x[:nElectrons], ppp[2] * nNuclei//2).prod(-1) * (ppp[4] * torch.sin(x[:nElectrons] * torch.pi / ppp[3])).prod(-1)


def Norm(wf, x):
    return (torch.conj(wf(x)) * wf(x)).real


vg = VEGAS()

for i in range(nNuclei):
    offsets[i+nElectrons] = ppp[0] * (i - nNuclei//2)

# create permutations

perms = []
perms_p = []
for i in permutations(list(range(nElectrons))):
    a = Permutation(list(i))
    print(i, a.parity())
    perms.append(list(i) + list(range(nElectrons, nParticles)))
    perms_p.append(a.parity())

perms = np.array(perms)
print(perms)
print(perms.shape)


def plotwf(ppp):
    pl_x = np.linspace(IntElectron[0][0].cpu(), IntElectron[0][1].cpu(), 100)
    pl_y = []

    plot_pos = 0
    for x in pl_x:
        def wf(x):
            return testwf(ppp, x)
        xinp = [0]*plot_pos + [x] + [0]*(nParticles-1-plot_pos)
        xinp = torch.from_numpy(np.array(xinp))
        if plot_pos < nElectrons:
            int_domain = [IntElectron[0]]*plot_pos + [[x, x+0.01]] + [IntElectron[0]]*(nElectrons-1-plot_pos) + [[-0.1, 0.1]]*nNuclei
        else:
            int_domain = [IntElectron[0]]*nElectrons + [[-0.1, 0.1]]*(plot_pos-nElectrons) + [[x, x+0.01]] + [[-0.1, 0.1]]*(nParticles - 1 - plot_pos)
        integral_value = vg.integrate(lambda y: vmap(lambda y: Norm(lambda x: testwf(ppp, x), y))(y), dim=nParticles, N=10000,  integration_domain=int_domain, max_iterations=20)
        pl_y.append(integral_value)
    pl_y = torch.tensor(pl_y).cpu().numpy()

    plt.plot(pl_x + offsets[plot_pos].cpu().numpy(), pl_y)
    plt.show()


plotwf(ppp)


def doIntegration(pinp):
    start = time.time()
    global offsets
    ppp = torch.tensor(pinp)
    for i in range(nNuclei):
        offsets[i+nElectrons] = ppp[0] * (i - nNuclei//2)
    IntElectron = [[-ppp[0] * (nNuclei//2) * 1.3, ppp[0] * (nNuclei//2) * 1.3]]
    IntNuclei = [[-1.7 / ppp[2], 1.7/ppp[2]]]
    Normvalue = integral_value = vg.integrate(lambda y: vmap(lambda y: Norm(lambda x: testwf(ppp, x), y))(y), dim=nParticles, N=N_Int_Points,  integration_domain=IntElectron*nElectrons+IntNuclei*nNuclei, max_iterations=30)
    integral_value = vg.integrate(lambda y: H(lambda x: testwf(ppp, x), y), dim=nParticles, N=N_Int_Points,  integration_domain=IntElectron*nElectrons+IntNuclei*nNuclei, max_iterations=30)
    retH = integral_value/Normvalue
    print("H", integral_value, ppp, retH, time.time() - start)
    return retH.cpu().numpy()


ret = minimizeCompass(doIntegration, x0=ppp.cpu().numpy(), deltainit=0.1, deltatol=0.01, paired=False, bounds=[[0.01, 5]]*ppp.shape[0], errorcontrol=True, funcNinit=10)

print(ret)

Suggestions for VEGAS algorithm changes

Feature

Desired Behavior / Functionality

It could be possible to change torchquad's VEGAS algorithm so that it converges a bit faster with common integrands.

  • A method could be added to VEGASMap which splits each interval in the middle. When executed, this would double the number of intervals while the mapping stays the same (until the next map update). With this it could be possible to warm up the VEGASMap first with a small number of intervals and points per iteration, then split the intervals and continue the warm-up with more points.
  • Currently, every fifth iteration VEGAS may abort, or reset the collected results and increase the number of samples per iteration.
    This behaviour comes from the VEGAS implementation on which torchquad's VEGAS is based. The chi2 / 5.0 < 1.0 condition may be incorrect since, according to the G. P. Lepage paper and tutorial, chi2 should be on the order of the number of iterations minus one, which corresponds to chi2 / 4.0 < 1.0.
  • If the VEGAS quadrature is executed sequentially many times with an integrand whose parameters change only slightly over time, it may sometimes be beneficial to re-use a previous VEGASMap and VEGASStratification. This situation may occur when VEGAS is used in a function which is optimised with stochastic gradient descent.

What Needs to Be Done

  • Implement the VEGASMap interval splitting in vegas_map.py and use it for the warm-up in vegas.py
  • Investigate if the condition on chi2 in vegas.py works well or should be changed
  • Change VEGAS so that it is possible to continue the integration with a new integrand but the same VEGASMap and VEGASStratification

Fix wrong return type in docstrings

Hi,
I would like to suggest that the docstrings of basically all .integrate functions of the classes in torchquad.integration are updated, as they currently read Returns: float: Integral value. However, the functions actually return an object of type torch.Tensor. Of course, this is neither a bug nor a big deal at all, but it results in the user getting warnings and complaints from the IDE/IntelliSense.

If you agree that this would be a good idea / quality of life update then I can offer to do the job of creating a Pull Request.

By the way: torchquad is a cool package! Thanks for your work!

Best regards
Martin

Add Gaussian quadrature methods

Feature

Desired Behavior / Functionality

Currently, torchquad only implements Newton-Cotes formulas as deterministic integration methods. A logical next step would be Gaussian quadrature methods.

What Needs to Be Done

  • Decide on integrators to implement (e.g. Gauss-Legendre)
  • Implement integrator in integration folder, should inherit from BaseIntegrator
  • Add docs for it
  • Add tests for it, orient at Newton-Cotes tests

How Can It Be Tested

With new tests!

Evaluate many different integrands over different domains

Feature

Desired Behavior / Functionality

Hey,

Thanks for developing this module. Recently I have been trying to deal with an integration problem which requires integrating many different integrands over different domains. For example, I have a set of functions [f_1(t; a1, b1), f_2(t; a2, b2), ..., f_n(t; an, bn)]; each function has the same form but differs in its parameters. What I want to do is integrate f_1, ..., f_n, with each function having its own integration domain, so the output would still be an n-dimensional tensor whose elements are the integration results of the f_i.

I have tracked the PRs, and it seems that integrating over the same domain has been implemented (#160), so I am wondering if this feature could be realized. If so, this tool could be applied in more scenarios.

Thanks,
Ken

Initial feedback from JOSS review

  1. There's a grammatical mistake here: https://github.com/esa/torchquad/blob/2824f1ddacdccadf3a880851f7ddf1bc1f878a36/torchquad/integration/base_integrator.py#L73 (it should be either "needs to match" or "needs to be matched to"). In either case, since integration_domain is referred to by variable name, ndim had better be mentioned explicitly as well.
  2. The usage example is clear and it works, but it should include the enable_cuda() call explicitly so that the user knows, just from reading the readme, how to use the GPU capabilities of the library (which is one of the main points).
  3. From the readme and rtd, it seems to work only with CUDA/NVIDIA. Is that the case? It should be mentioned explicitly in the readme and docs. Note that PyTorch also offers a ROCm version, so users might expect it to work with their AMD cards.

We have received initial feedback from JOSS reviewers which we should address.

A lot of warnings in the current test CI on develop

Issue

Problem Description

Currently the tests throw a lot of warnings on the develop branch:

 =============================== warnings summary ===============================
torchquad/tests/boole_test.py: 4 warnings
torchquad/tests/gradient_test.py: 3 warnings
torchquad/tests/integrator_types_test.py: 4 warnings
torchquad/tests/monte_carlo_test.py: 4 warnings
torchquad/tests/simpson_test.py: 7 warnings
torchquad/tests/trapezoid_test.py: 4 warnings
  /home/runner/work/torchquad/torchquad/torchquad/tests/../integration/utils.py:255: UserWarning: DEPRECATION WARNING: In future versions of torchquad, an array-like object will be returned.
    warnings.warn(

torchquad/tests/boole_test.py::test_integrate_torch
  /home/runner/micromamba-root/envs/torchquad/lib/python3.10/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /home/conda/feedstock_root/build_artifacts/pytorch-recipe_1675740247391/work/aten/src/ATen/native/TensorShape.cpp:3190.)
    return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]

torchquad/tests/monte_carlo_test.py::test_integrate_jax
  /home/runner/micromamba-root/envs/torchquad/lib/python3.10/site-packages/autoray/autoray.py:79: UserWarning: Explicitly requested dtype <class 'jax.numpy.float64'> requested in array is not available, and will be truncated to dtype float32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github.com/google/jax#current-gotchas for more.
    return get_lib_fn(backend, fn)(*args, **kwargs)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================== 48 passed, 28 warnings in 84.06s (0:01:24) ==================

Expected Behavior

No or few warnings.

What Needs to be Done

Fix warnings ✌️

How Can It Be Tested or Reproduced

Running the tests with the environment for all backends, or on GitHub.

Integrate function with parameters

Hi, I would like to know whether there is functionality for integrating functions with parameters. For example, if one has a function

def func(alpha, x):
    return x**2 * alpha

how can one apply the Trapezoid rule to integrate over x, for example? In the docs there is information about additional function arguments in the BaseIntegrator class, but it seems they are always set to None in the concrete methods.

If there is no such option, one may add something like args=(alpha,) to Trapezoid.integrate() or any other method.

Thanks in advance!
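
Until such an args option exists, the parameters can be bound with a closure before passing the integrand; a minimal sketch using the documented Trapezoid API (note that the integrand receives points of shape (N, dim)):

import torch
from torchquad import Trapezoid

def func(alpha, x):
    return x[:, 0] ** 2 * alpha

alpha = torch.tensor(3.0)
tp = Trapezoid()
# Bind alpha with a lambda so the integrand only takes x
result = tp.integrate(
    lambda x: func(alpha, x),
    dim=1,
    N=101,
    integration_domain=[[0.0, 1.0]],
    backend="torch",
)
print(result)  # ~ alpha / 3 = 1.0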

VEGAS slow?

Hi!

First of all, thanks for starting this project!

I was tinkering with the VEGAS integrator and noticed that its speed (both on CPU (PyTorch automatically uses multiple cores, right?) and GPU) is nowhere near that of the standard Python vegas package (with vectorized integrands) on just a single core. The results do match.

Have you tested torchquad vs standard vegas and run into the same?

Curious to hear what you think about this :)

Any support for more advanced quadrature methods (e.g. Legendre-Gauss quadrature)?

Thanks for releasing this great work for differentiable quadrature in pytorch!
I recently wanted to develop a new operator for a more advanced convolution, but unfortunately this operator involves integrating a one-dimensional function over an unbounded interval (from 0 to +inf), and each evaluation of this function is rather time-consuming. I can do a change of variables to map it into the range [-1, 1]. But to reduce the number of evaluations, I may need a more accurate quadrature method, and Gaussian quadrature is probably the most accurate one; the number of evaluations can be kept constant for the same operator. So I'm asking whether there exist more accurate/advanced quadrature methods in torchquad, for example Legendre-Gauss, Romberg, adaptive Simpson, or any other?

Add PyTorch Profiler to track performance

Feature

Desired Behavior / Functionality

Until now we have used cProfile to investigate the performance of torchquad. However, especially for GPUs it may be interesting to start investigating with the new PyTorch profiler, especially as it has recently received some upgrades.

What Needs to Be Done

  • Set up things so that profiling with PyTorch profiler is easily doable
  • Profile!
  • Feel free to sum up your conclusions as to what optimization targets might be from this.
  • (optionally) add a notebook or some code to run a performance analysis of torchquad using the profiler.

Allow support of unequal numbers of points per dimension

Support for unequal numbers of points per dimension

Desired Behavior / Functionality

Torchquad currently only supports equal numbers of points per dimension for the deterministic methods (that use integration_grid.py). The user might want to use more points in one dimension than the others (if, for instance, the integration domain in one dimension is much larger than in the others or the function is more complex in certain dimensions).
Exactly how this should be implemented can be a bit up to the contributor; perhaps the ideal solution is allowing many different types of input (see below).

What Needs to Be Done

Under torchquad.integration_grid.py, we have the following comment # TODO Add that N can be different for each dimension.
Currently, what happens is that the number of sample points per dimension (self._N) is set to the nth root of the total number of sample points, N, where n is the number of dimensions.
self._N = int(N ** (1.0 / self._dim) + 1e-8) # convert to points per dim (the 1e-8 term here is explained in the code).

It would be nice to allow self._N to be, for instance, [16, 2, 2] instead of [4, 4, 4].

One solution could be to do the following:

If N is an integer, then let self._N be the nth root of N.
If N is a list/array, then let self._N be that list/array.

Note: For Monte Carlo and VEGAS, varying the number of points per dimension doesn't make sense.
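
A rough sketch of the input handling proposed above (a hypothetical helper for illustration, not the actual integration_grid.py code):

def points_per_dim(N, dim):
    # Current behaviour: nth root of the total number of points
    if isinstance(N, int):
        return [int(N ** (1.0 / dim) + 1e-8)] * dim
    # Proposed: accept an explicit per-dimension list, e.g. [16, 2, 2]
    assert len(N) == dim, "expected one entry per dimension"
    return list(N)

print(points_per_dim(64, 3))          # [4, 4, 4]
print(points_per_dim([16, 2, 2], 3))  # [16, 2, 2]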

How Can It Be Tested

A good place to start is to check if the tests fail, in particular the integration_grid_test.py.
Ideally, additional tests should be added to integration_grid_test.py.

Example/documentation for parametric domain of integration?

Hi,

I found issue #170 asking about the case where both the integrand and the domain of integration are functions of parameters, and it looks like @ilan-gold implemented this feature in pull request #173. I'd like to use this feature and wanted to ask if there are any examples or documentation showing how to use it.

Thanks!

Regression in tests with TF

Issue

Problem Description

Boole test fails with

 =========================== short test summary info ============================
FAILED boole_test.py::test_integrate_tensorflow - tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: in user code:

    File "/home/runner/work/torchquad/torchquad/torchquad/tests/../integration/grid_integrator.py", line 131, in calculate_grid  *
        grid = IntegrationGrid(
    File "/home/runner/work/torchquad/torchquad/torchquad/tests/../integration/integration_grid.py", line 44, in __init__  **
        self._check_inputs(N, integration_domain, disable_integration_domain_check)
    File "/home/runner/work/torchquad/torchquad/torchquad/tests/../integration/integration_grid.py", line 114, in _check_inputs
        dim = _check_integration_domain(integration_domain)
    File "/home/runner/work/torchquad/torchquad/torchquad/tests/../integration/utils.py", line 209, in _check_integration_domain
        if boundaries_are_invalid:

OperatorNotAllowedInGraphError: Using a symbolic `tf.Tensor` as a Python `bool` is not allowed. You can attempt the following resolutions to the problem: If you are running in Graph mode, use Eager execution mode or decorate this function with @tf.function. If you are using AutoGraph, you can try decorating this function with @tf.function. If that does not work, then you may be using an unsupported feature or your source code may not be visible to AutoGraph. See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/limitations.md#access-to-source-code for more information.
============= 1 failed, 51 passed, 75 warnings in 78.24s (0:01:18) =============
Error: Process completed with exit code 1.

Expected Behavior

Test passing :)

What Needs to be Done

Not quite sure, @FHof, do you maybe have an idea? Decorating the entire function is probably not possible given that it requires a TF install?

How Can It Be Tested or Reproduced

Running boole test with TF.

Add QUADPACK to torchquad

Feature

Desired Behavior / Functionality

The state of the art in deterministic integration methods is arguably QUADPACK. However, there are no GPU implementations of it.

What Needs to Be Done

  • Identify good baseline implementation of QUADPACK
  • Implement integrator in integration folder, should inherit from BaseIntegrator
  • Add docs for it
  • Add tests for it, orient at Newton-Cotes tests

How Can It Be Tested

  • New tests
  • Compare to baseline (e.g. scipy)

Cannot import torchquad (conda install)

Issue

Problem Description

import torchquad fails, throwing a ModuleNotFoundError after installing using conda install torchquad -c conda-forge (with success).

Python version: 3.8.12
Torchquad version: 0.3.0

>>> import torchquad
Traceback (most recent call last):
  File "/Users/tcantelo/miniforge3/envs/causal/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3378, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-2-b1a203752725>", line 1, in <module>
    import torchquad
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "/Users/tcantelo/miniforge3/envs/causal/lib/python3.8/site-packages/torchquad/__init__.py", line 6, in <module>
    from .integration.integration_grid import IntegrationGrid
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "/Users/tcantelo/miniforge3/envs/causal/lib/python3.8/site-packages/torchquad/integration/integration_grid.py", line 6, in <module>
    from .utils import (
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
  File "/Users/tcantelo/miniforge3/envs/causal/lib/python3.8/site-packages/torchquad/integration/utils.py", line 18, in <module>
    from utils.set_precision import _get_precision
  File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
    module = self._system_import(name, *args, **kwargs)
ModuleNotFoundError: No module named 'utils.set_precision'; 'utils' is not a package

Any help would be appreciated! 😄

Code style in the tests

It could be possible to change the tests code so that it is easier to read and maintain:

  • Currently the tests use sys.path.append for the imports instead of importing torchquad, so __init__.py is not executed. This also hinders the use of relative imports, e.g. in integration/utils.py.
  • It may be possible to shorten the code by replacing the setup_test_for_backend with parameterized tests.
  • utils_integration_test.py shows warnings while the other code uses pytest skips if a backend is not installed. The code could be changed so that pytest skips are used everywhere.

Some of these changes may however make it more difficult to support the execution of the test files with python3 in addition to pytest.

A documentation on how to execute the tests could also be helpful, for example because of GPU out-of-memory problems due to the backend imports:
The test executions on GPU currently may require environment variables which change the memory allocation behaviour of the backends since all backends are imported one after another and some of them can reserve the whole GPU memory.
These environment variables are, for example, XLA_PYTHON_CLIENT_PREALLOCATE=false, TF_FORCE_GPU_ALLOW_GROWTH=true and TF_GPU_ALLOCATOR=cuda_malloc_async. It is also possible to execute the tests on the CPU with CUDA_VISIBLE_DEVICES="".
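
For example, a GPU test run with these workarounds might look like:

XLA_PYTHON_CLIENT_PREALLOCATE=false TF_FORCE_GPU_ALLOW_GROWTH=true TF_GPU_ALLOCATOR=cuda_malloc_async pytest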

Change behavior of 'backend'

Feature

After upgrading from 0.2.3 to 0.3.0 I encountered an issue where, when integrating using the Simpson() class, the backend grid had changed from a PyTorch tensor to a NumPy array. After a cursory read of the documentation, I attempted setting the backend flags both in the integrator and using set_backend(). When this did not work, I was about to file an issue when I looked more closely and found that this input is ignored when the integration_bounds variable is not a tensor. I was then able to fix my issue. However, I think this decision should be reconsidered, as I did not find it intuitive.

Desired Behavior / Functionality

backend should not be silently ignored; at the very least, a warning should be raised stating that it has been ignored in favor of the integration bounds. Ideally, the behavior should be more or less consistent between classes.

Edit: Added specificity by saying I used the Simpson() method

Start investigating test coverage

Feature

Desired Behavior / Functionality

Currently, there are several tests for a multitude of components of torchquad. However, we have never investigated test coverage. This may be desirable, especially since, as the project grows over time, we are bound to face regressions if a large number of parts are untested.

What Needs to Be Done

  • Find a good way to investigate test coverage of current tests.
  • Implement and add to docs how it can be used
  • Add some feedback on test coverage to CI if it makes sense
  • Improve test coverage (optional)

torch.tensor containing integers as integration domain returns zero with non-compiled integrator

Hi,

first of all thanks a lot for this great package, I've been using it heavily over the past few weeks and I'm very happy about its performance.

The issue: today I started using the functionality for calculating derivatives with respect to domain boundaries. I had previously employed jit-compiled integrators which require torch tensors as arguments for the integration domains. Compilation currently seems to be incompatible with computing the gradients, so I switched back to the non-compiled integrator, leaving everything else untouched. I then noticed that the integration consistently returns zero unless I pass the integration domain as a list instead of a torch tensor.

Example code to reproduce:

integrator = Simpson()
print(integrator.integrate(
    lambda x: x,
    dim=1,
    N=101,
    integration_domain=[[0,1]],
    backend='torch'))
print(integrator.integrate(
    lambda x: x,
    dim=1,
    N=101,
    integration_domain=torch.tensor([[0,1]]),
    backend='torch'))
jit_integrator = integrator.get_jit_compiled_integrate(
    dim=1,
    N=101,
    backend='torch')
print(jit_integrator(
    lambda x: x,
    torch.tensor([[0,1]])))

This returns the values 0.5, 0., 0.5, but they should of course all be the same. This is unexpected behavior; it should at least throw an error or a warning to inform the user about the incorrect type. Although IMHO it should ideally allow lists, numpy arrays, and torch tensors for the integration domain argument. It's a minor issue but may lead to confusion (as in my case), so it might be a good idea to catch this somehow.

Cheers

Edit: I found that if I use torch.tensor([[0.,1.]]) instead of torch.tensor([[0,1]]), it does work as expected. So the problem is not the type of the object, but that it contains integers instead of floats. I still believe this should be fixed or a warning issued. I'm not sure why the jit-compiled version is not affected by this bug.
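
Based on the observation above, a simple workaround until this is fixed is to cast the domain tensor to a floating-point dtype before integrating:

import torch
from torchquad import Simpson

integrator = Simpson()
domain = torch.tensor([[0, 1]])  # integer dtype triggers the bug
print(integrator.integrate(
    lambda x: x,
    dim=1,
    N=101,
    integration_domain=domain.float(),  # casting to float avoids it
    backend='torch'))  # ~0.5 as expected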

Elementwise numerical integration

Issue (how-to)

Problem Description

I need to perform numerical integration on each element of a tuple of tensors. The tuple contains the parameters of a normal distribution, and the integration domain can be determined from these as well. From my understanding, torchquad performs one integration at a time. I could repeatedly launch an integration in a for loop for each element of the tensor; that, however, runs counter to the practice of tensor libraries. Is there a way for me to do it properly?

Here is a gist with the code for the problem described: https://gist.github.com/sebastienwood/a8aafd4b4480e12c7dae3923d89d7a4c

Let me know if I can provide more information !

Release 0.4.0

Feature

Changelog

to be written during release process

What Needs to Be Done (chronologically)

  • Create PR from main -> develop to incorporate hotfixes / documentation changes.
  • In case of merge conflicts, create a PR to fix them which is then merged into main, or fix them on GitHub, but make sure to let it create a new branch for the changes.
  • Review the PR
  • Create PR to merge from current develop into release branch
  • Write Changelog in PR and request review
  • Review the PR (if OK - merge, but DO NOT delete the branch)
  • Minimize packages in requirements.txt and conda-forge submission. Update packages in setup.py
  • Check unit tests -> Check all tests pass and there are tests for all important features (go to cmder, activate TQ, go to tests folder, type "pytest" and run)
  • Check documentation -> Check presence of documentation for all features by locally building the docs on the release
  • Change version number in setup.py and docs (under conf.py)
  • Trigger the Upload Python Package to testpypi GitHub Action (https://github.com/esa/torchquad/actions/workflows/deploy_to_test_pypi.yml) on the release branch (need to be logged in)
  • Test the build on testpypi (with pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple torchquad)
  • Finalize release on the release branch
  • Create PR: release → main , release -> develop
  • PR Reviews
  • Merge release back into main, and develop
  • Create Release on GitHub from the last commit (the one reviewed in the PR) reviewed
  • Upload to PyPI
  • Update on conda following https://conda-forge.org/docs/maintainer/updating_pkgs.html

Add more complex examples to tutorials showcasing differentiability

Feature

Desired Behavior / Functionality

torchquad allows fully differentiable numerical integration. This can enable neural network training through integrals, and this capability deserves a dedicated example. There is an example of the gradient computations in the docs already. However, training a simple neural network on e.g. a gravitational potential or a similar problem could be a cool example.

What Needs to Be Done

  • Find a nice example to showcase neural network training / optimization capabilities due to torchquad's differentiability
  • Add to docs
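
For reference, differentiating through an integral already works along these lines (a minimal sketch; the parameter and integrand are made up for illustration):

import torch
from torchquad import Trapezoid, set_up_backend

set_up_backend("torch", data_type="float32")

alpha = torch.tensor(2.0, requires_grad=True)  # hypothetical trainable parameter
tp = Trapezoid()
integral = tp.integrate(
    lambda x: torch.exp(-alpha * x[:, 0] ** 2),
    dim=1,
    N=1001,
    integration_domain=[[0.0, 1.0]],
    backend="torch",
)
integral.backward()
print(alpha.grad)  # d(integral)/d(alpha) via autograd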

Tests failing on GPU

Issue

Problem Description

Related to Release 0.4.0, can be fixed directly on release branch.

Currently the tests fail on GPUs, for torch because of a missing transfer from GPU memory to host. For TF there seems to be a breaking API change ( ImportError: cannot import name 'np_config' from 'tensorflow.python.ops.numpy_ops' , see https://stackoverflow.com/questions/75727569/cannot-import-name-np-config-from-tensorflow-python-ops-numpy-ops )

Logs:

pytorch_gpu: pytest_gpu0.log

TF_gpu, torch_cpu (failed to get JAX working since this was on a win machine): pytest_all_gpu0.log

Setting up an env to check with all frameworks on GPU also proved time-consuming. (and failed for jax)

Expected Behavior

  • Tests passing.
  • (optional) Easier way to check in the future

What Needs to be Done

  • Add calls to transfer to host memory
  • Fix / update TF API call
  • (optional) add another .yml for GPU version of all libs.

How Can It Be Tested or Reproduced

Run pytest, using the following env:

name: torchquad_all
channels:
  - anaconda
  - conda-forge
  - pytorch
  - nvidia
dependencies:
  - autoray>=0.2.5
  - loguru>=0.5.3
  - matplotlib>=3.3.3
  - pytest>=6.2.1
  - python>=3.8
  - scipy>=1.6.0
  - sphinx>=3.4.3
  - tqdm>=4.56.0
  # Numerical backend installations with CUDA support where possible:
  - numpy>=1.19.5
  - pytorch>=1.9 # CPU version
  - tensorflow-gpu
  # jaxlib with CUDA support is not available for conda
  - pip:
      - --find-links https://storage.googleapis.com/jax-releases/jax_releases.html
      - jax[cpu]>=0.2.22 # this will only work on linux. for win see e.g. https://github.com/cloudhan/jax-windows-builder
        # CPU version

Logging with loguru not implemented correctly

Issue

Problem Description

Current usage may overwrite loguru logger in other applications

Expected Behavior

Not interfering with user's logging

What Needs to be Done

Correct implementation is described here and here

How Can It Be Tested or Reproduced

Something along the lines of

from loguru import logger
logger.info("This should be printed")
import torchquad as tq
logger.info("This may not get printed I think")

Let user choose which GPU to use

Feature

Desired Behavior / Functionality

Currently, one can only enable or disable CUDA, and only globally, using the torchquad.set_up_backend function. First of all, this means that even on multi-GPU machines one can only ever use the first device "cuda:0". Secondly, it means that using torchquad can break existing code that one tries to integrate it into, because the set_up_backend function globally changes how torch Tensors are initialized. Instead I propose to include device as an optional argument in the integrate function.

What Needs to Be Done

Unfortunately, I am not familiar enough with the library's code to make informed comments on how this can be implemented. I suspect that it's actually a fairly difficult request.
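
A sketch of how the proposed call could look (the device argument is hypothetical and not implemented):

import torch
from torchquad import MonteCarlo

def f(x):
    return torch.sin(x[:, 0]) + torch.exp(x[:, 1])

mc = MonteCarlo()
integral = mc.integrate(
    f,
    dim=2,
    N=10000,
    integration_domain=[[0, 1], [-1, 1]],
    backend="torch",
    device="cuda:1",  # hypothetical: use the second GPU for this call only
)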

Deprecated torch.set_default_tensor_type()

Issue

Problem Description

torch.set_default_tensor_type() is deprecated as of PyTorch 2.1

Expected Behavior

What Needs to be Done

use torch.set_default_dtype() and torch.set_default_device() as alternatives

How Can It Be Tested or Reproduced

use set_up_backend("torch") with PyTorch >= 2.1
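
A sketch of the replacement calls (standard PyTorch APIs; whether they cover everything set_up_backend needs would have to be verified):

import torch

# instead of torch.set_default_tensor_type(torch.cuda.FloatTensor):
torch.set_default_dtype(torch.float32)
if torch.cuda.is_available():
    torch.set_default_device("cuda")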

Improve autoformatting of PRs

Feature

Desired Behavior / Functionality

Ideally we want to autoformat every PR with black. Right now there is a GitHub Action in https://github.com/esa/torchquad/blob/main/.github/workflows/autoblack.yml for this. However, it doesn't work anymore and it is annoying because it adds commits to people's PRs (potentially without them noticing).

Is there a better option as a hook somewhere during merge e.g.?

What Needs to Be Done

How Can It Be Tested

Change should be visible on the PR for this?

CI failing on coverage check

Issue

Problem Description

Currently tests, e.g. on #185 and #188, fail due to an error with the coverage comment.

Expected Behavior

Not failing

What Needs to be Done

TBD

How Can It Be Tested or Reproduced

Run CI

Low Discrepancy Sequences for MonteCarlo and VEGAS

Feature

Desired Behavior / Functionality

Low discrepancy sequences could be used as an optional replacement for random number generation for MonteCarlo and VEGAS.
With certain integrands, this could lead to a higher average accuracy.

What Needs to Be Done

  • Add a class similar to the RNG class to generate numbers with a low discrepancy sequence instead of a PRNG. An instance of it can be passed as the rng argument to VEGAS and MonteCarlo (see the sketch after this list).
    This class could use, for example, PyTorch's and TensorFlow's Sobol sequences: https://www.tensorflow.org/api_docs/python/tf/math/sobol_sample, https://pytorch.org/docs/stable/generated/torch.quasirandom.SobolEngine.html
  • Add a function to the number generator classes which samples points, and use it for MonteCarlo and VEGAS instead of uniform. In comparison to uniform, the output of this function always corresponds to points in a space where, conceptually, a distance function is defined.
  • Change MonteCarlo.get_jit_compiled_integrate so that it works with the number generator class for low discrepancy sequences.
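
A minimal sketch of drawing Sobol points with PyTorch's built-in engine (standard API; wiring it into an RNG-like class is left open):

import torch

# Draw 8 quasi-random points in 3 dimensions in [0, 1)
engine = torch.quasirandom.SobolEngine(dimension=3, scramble=True, seed=42)
points = engine.draw(8)  # shape (8, 3)
print(points)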

How Can It Be Tested

It can be tested with additional tests in the torchquad/tests folder.

Backend strategy

Hi all, first of all, great work! Numerical integration methods in compiled, gradient supporting frameworks is really lacking.

My question is about backends. At least so far, it seems that only a minor subset of the torch methods is used. Since there are other libraries, notably JAX and TensorFlow, which both have a numpy-like API just as PyTorch does, it seems nearly trivial to me to support these backends as well, at least at the current status of the project.

There are already a few dedicated projects in this niche, such as vegasflow and some one-dimensional integrators.

I think the limitations of all frameworks are quite similar: they are great at vectorized operations and bad at adaptive/sequential methods. I am also aware, though, that this can potentially make more sophisticated integration methods more difficult to implement, such as those using control flow.

Do you have any thoughts on this?

P.S.: the motivation comes from a likelihood model fitting library that we use in High Energy Physics and which (currently) uses TensorFlow (and probably JAX in the future). Numerically integrating a function efficiently to make it a PDF is therefore key.

Release 0.3.0

Feature

Changelog

Major

  • Added support for NumPy, JAX, Tensorflow via autoray for most integrators. Refer to docs for more details.
  • Huge performance improvements to VEGAS

Minor

  • More tests
  • New environment variable called TORCHQUAD_LOG_LEVEL to conveniently control loglevel, default changed to "warning"
  • Support for (JIT) compilation of the integration, except VEGAS
  • More strict code linting with flake8 and a corresponding small code cleanup
  • Docs improvements
  • Custom RNG class
  • Refactoring of Newton-Cotes integrators

What Needs to Be Done (chronologically)

  • Create PR from main -> develop to incorporate hotfixes / documentation changes.
  • In case of merge conflicts create PR to fix them which is then merged into main.
  • Review the PR
  • Create PR to merge from current develop into release branch
  • Write Changelog in PR and request review
  • Review the PR (if OK - merge, but DO NOT delete the branch)
  • Minimize packages in requirements.txt and conda-forge submission. Update packages in setup.py
  • Check unit tests -> Check all tests pass and there are tests for all important features (go to cmder, activate TQ, go to tests folder, type "pytest" and run)
  • Check documentation -> Check presence of documentation for all features by locally building the docs on the release
  • Change version number in setup.py and docs (under conf.py)
  • Trigger the Upload Python Package to testpypi GitHub Action (https://github.com/esa/torchquad/actions/workflows/deploy_to_test_pypi.yml) on the release branch (need to be logged in)
  • Test the build on testpypi
  • Finalize release on the release branch
  • Create PR: release → main , develop -> main
  • PR Reviews
  • Merge release back into main, and develop
  • Create Release on GitHub from the last commit (the one reviewed in the PR) reviewed
  • Upload to PyPI
  • Update on conda following https://conda-forge.org/docs/maintainer/updating_pkgs.html

Move the release process for torchquad from JIRA

Feature

Desired Behavior / Functionality

Currently, the process for creating a release is described in a private JIRA.

What Needs to Be Done

Move the release process description to GitHub wiki / readthedocs for transparency and ensured continuous availability.

Vectorize VEGAS over dimensionality

Feature

Desired Behavior / Functionality

We would like to vectorize VEGAS over dimensionality to speed it up.

What Needs to Be Done

In the files vegas_map.py and vegas_stratification.py there are several loops iterating over the dimension. These should be vectorized to speed up integration with vegas. This should be particularly beneficial for high-dimensional problems.

In particular, the following functions are good candidates:

  • vegas_map.py - get_X
  • vegas_map.py - get_Jac
  • vegas_map.py - accumulate_weight
  • vegas_map.py - _smooth_map
  • vegas_map.py - update_map
  • vegas_stratification.py - _get_indices

How Can It Be Tested

All unit tests should still pass (they test error convergence)

A good way of testing runtime is by using the vegas_test.py in the tests folder. If called directly with python vegas_test.py it will print out a cprofile of the tests. Speedups should be particularly noticeable for high-dimensional problems.
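
As a generic illustration of the kind of change meant here (not actual torchquad code), a per-dimension Python loop can often be replaced by a single batched tensor operation:

import torch

n_points, dim = 1000, 8
x = torch.rand(n_points, dim)
weights = torch.rand(dim)

# Looped version: one operation per dimension
out_loop = torch.empty_like(x)
for d in range(dim):
    out_loop[:, d] = x[:, d] * weights[d]

# Vectorized version: one broadcasted operation over all dimensions
out_vec = x * weights

assert torch.allclose(out_loop, out_vec)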

VEGAS is not available when installing via pip

Hello,

I installed the package via pip and couldn't import VEGAS using:

from torchquad import VEGAS

I checked the installed package and the torchquad.integration module doesn't contain vegas.
