
HCubature.jl's People

Contributors

andreasnoack, araujoms, chrisrackauckas, dependabot[bot], github-actions[bot], jagot, jneem, juliatagbot, jwscook, ksmcreynolds, lxvm, maltezfaria, pabloferz, ranocha, stevengj, timholy, yakir12


HCubature.jl's Issues

Error estimation

I must say that the error estimate is not very trustworthy.
The following code

using HCubature, LinearAlgebra

function fmax(x)
    return max(x[1], x[2], (1 - x[1]) * (1 - x[2]))
end

r1 = hcubature(fmax, [0, 0], [1, 1]; norm=norm, rtol=1e-14, atol=0, maxevals=10^8, initdiv=1)
println(r1)

r2 = hcubature(fmax, [0, 0], [1, 1]; norm=norm, rtol=1e-14, atol=0, maxevals=10^8, initdiv=10)
println(r2)

outputs

(0.7287375296474494, 3.3787061040185426e-14)
(0.7287375317249826, 2.8626353358787654e-14)

The results differ in the 8th digit, while the estimated error is about 3e-14.

Parallel evaluation of the integrand

In the older Cubature.jl package, it is possible to evaluate the integrand in parallel by letting the function f accept matrix arguments (a batch of points at once). Could such a feature be supported in HCubature.jl? (See the sketch below for the Cubature.jl-style interface.)
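For reference, here is a minimal sketch of the Cubature.jl-style batch interface being requested (this illustrates Cubature.jl's existing hcubature_v API as I understand it, not anything in HCubature.jl): the integrand receives a d×n matrix x of evaluation points, one per column, and fills the length-n output vector v in place, so a batch can be evaluated in parallel.

using Cubature

# Batch integrand: x is a d×n matrix (one point per column); v is the
# length-n output vector to fill in place. Each column can be handled
# on a separate thread.
function f_batch(x, v)
    Threads.@threads for j in 1:size(x, 2)
        v[j] = sum(abs2, @view x[:, j])
    end
end

Cubature.hcubature_v(f_batch, [0.0, 0.0], [1.0, 1.0])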

GPU support

What would it take to make hcubature GPU-friendly, in the sense that there is no scalar indexing? Is that even possible?

MWE:

using HCubature, CUDA
CUDA.allowscalar(false)
hcubature(x -> sum(abs2, x), CuArray(fill(-3, 3)), CuArray(fill(3, 3))) # ERROR: Scalar indexing is disallowed.
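One hedged observation: hcubature's own bookkeeping (splitting boxes, evaluating the cubature rule) indexes individual coordinates of the limits, so the limits themselves probably have to stay on the CPU. Under that assumption, a workaround is to pass small CPU containers (e.g. SVectors) as limits and keep any GPU-heavy work inside the integrand:

using HCubature, StaticArrays

# The limits stay on the CPU, where scalar indexing is cheap and legal;
# the integrand body is still free to launch GPU work internally.
hcubature(x -> sum(abs2, x), SVector(-3.0, -3.0, -3.0), SVector(3.0, 3.0, 3.0))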

Automatic support for infinite domains

I would like to integrate a two-dimensional function from -Inf to Inf. To explore my possibilities, I wrote this simple function

f(x) = x.^2 .*pdf.(Normal(), 2 .*x)

It works without problems when using QuadGK, but does not converge at all using HCubature:

using QuadGK, Distributions, HCubature

f(x) = x.^2 .*pdf.(Normal(), 2 .*x)
quadgk(f, -Inf, Inf)
hcubature(f, [-Inf], [Inf] )

Is this a well known problem?
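For what it's worth, the usual workaround (a standard change of variables, not a built-in HCubature feature; it assumes the integrand decays at infinity) is to map the infinite domain onto a finite box and multiply by the Jacobian:

using HCubature, Distributions

f(x) = x[1]^2 * pdf(Normal(), 2 * x[1])

# Map t ∈ (-1, 1) to x = t/(1 - t²) ∈ (-∞, ∞); the Jacobian is
# dx/dt = (1 + t²) / (1 - t²)².
function g(t)
    x = t[1] / (1 - t[1]^2)
    jac = (1 + t[1]^2) / (1 - t[1]^2)^2
    return f((x,)) * jac
end

hcubature(g, (-1.0,), (1.0,))  # ≈ 0.125, matching the quadgk result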

Performance versus `quadgk`

Hi, I ran some benchmarks and noticed that for 1-d integrals quadgk is much faster than hquadrature. Here is a simple example:

julia> using HCubature, BenchmarkTools

julia> f() = hquadrature(t -> exp(-t)/t, 1, 100000)
f (generic function with 1 method)

julia> g() = HCubature.QuadGK.quadgk(t -> exp(-t)/t, 1, 100000)
g (generic function with 1 method)

julia> @btime f()
  26.478 μs (1131 allocations: 31.97 KiB)
(0.21938393439552029, 1.3846093405775578e-9)

julia> @btime g()
  10.078 μs (339 allocations: 8.02 KiB)
(0.2193839343955203, 1.3846093658126016e-9)

Is this difference at the algorithm level, or is quadgk just better optimized internally?

Thanks!

binary_maxheap is deprecated

┌ Warning: `binary_maxheap(::Type{T}) where T` is deprecated, use `BinaryMaxHeap{T}()` instead.
│   caller = hcubature_(::Function, ::SArray{Tuple{2},Float64,1,2}, ::SArray{Tuple{2},Float64,1,2}, ::typeof(LinearAlgebra.norm), ::Int64, ::Int64, ::Int64, ::Int64) at HCubature.jl:64
└ @ HCubature ~/.julia/packages/HCubature/BNvCh/src/HCubature.jl:64

Very minor speed-up in countevals

countevals in genz-malik.jl is very cheap and is called just once per call to hcubature, so improving it is definitely not a priority. However, its result can be computed at compile time by making countevals a generated function, for example in this way:

get_dimensions(::Type{GenzMalik{n,T}}) where {n,T} = n

# Inside a generated function the argument `g` is bound to its type,
# so the evaluation count is computed once per concrete type.
@generated function countevals(g::GenzMalik{n}) where {n}
    m = get_dimensions(g)
    1 + 4m + 2*m*(m-1) + (1 << m)
end

I didn't open a PR because I'm not sure this is really worth it.
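A side note (my observation, not from the issue): since n is already a type parameter, the compiler should be able to constant-fold the same arithmetic without @generated at all:

# n is known at compile time, so this ordinary method should also
# reduce to a constant for each concrete GenzMalik{n,T}.
countevals(g::GenzMalik{n}) where {n} = 1 + 4n + 2n*(n - 1) + (1 << n)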

Stubborn function

I have a function that accepts a 2-dimensional vector and returns a scalar. If I sample 100 points along each of its two dimensions, I get this:
[screenshot: plot of the sampled function, zero almost everywhere except two small regions]
Admittedly, there are only two regions where the function is not zero and they are kind of small. But no matter how small I set the rtol to, or how large I set the maxevals to, this still results in zero:

julia> hcubature(getsignal, [0, 0], [1, 0.5], rtol=1e-100, maxevals=typemax(Int))
(0.0, 0.0)

and very quickly. Any ideas on what I can do?
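One workaround worth trying (my suggestion, assuming the nonzero regions are simply invisible to the initial rule; getsignal stands for the function in the issue): increase initdiv so that the initial grid is fine enough for some sample points to land inside the nonzero regions, after which adaptive refinement can take over.

# Hypothetical: pre-subdivide each dimension into 32 boxes so the
# initial evaluations can "see" the small nonzero regions.
hcubature(getsignal, [0, 0], [1, 0.5]; rtol=1e-8, initdiv=32)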

Return internal information for future use

I'm currently doing a lot of cubature integrals, and it's quite slow due to the sheer number of integrals. However, I suspect that the number of intervals and/or quadrature points at convergence is quite similar from one integral to another. Is there an opportunity for a speed-up if this information could be passed from one cubature evaluation to the next, so that the algorithm can start off from a better position?

Top work on this package btw!

Type instability when using a vector for the limits.

Hi, I noticed that I was getting a type instability when using hcubature() inside a function I created. I did some tests, and it only happens when I use a vector for a and b; the same doesn't happen with a tuple.

Here is what I get from @code_warntype:

julia> using HCubature

julia> f(x,y) = x + y
f (generic function with 1 method)

julia> @code_warntype hcubature(x->f(x[1],x[2]),[0,0],[1,1])
Variables
  #self#::Core.Compiler.Const(HCubature.hcubature, false)
  f::Core.Compiler.Const(var"#9#10"(), false)
  a::Array{Int64,1}
  b::Array{Int64,1}
  norm::typeof(LinearAlgebra.norm)
  rtol::Int64
  atol::Int64
  maxevals::Int64
  initdiv::Int64

Body::Tuple{Any,Any}
1 ─ %1 = HCubature.norm::Core.Compiler.Const(LinearAlgebra.norm, false)
│        (norm = %1)
│        (rtol = 0)
│        (atol = 0)
│        (maxevals = HCubature.typemax(HCubature.Int))
│        (initdiv = 1)
│   %7 = HCubature.:(var"#hcubature#3")(norm, rtol::Core.Compiler.Const(0, false), atol::Core.Compiler.Const(0, false), maxevals::Core.Compiler.Const(9223372036854775807, false), initdiv::Core.Compiler.Const(1, false), #self#, f, a, b)::Tuple{Any,Any}
└──      return %7

julia> @code_warntype hcubature(x->f(x[1],x[2]),(0,0),(1,1))
Variables
  #self#::Core.Compiler.Const(HCubature.hcubature, false)
  f::Core.Compiler.Const(var"#11#12"(), false)
  a::Tuple{Int64,Int64}
  b::Tuple{Int64,Int64}
  norm::typeof(LinearAlgebra.norm)
  rtol::Int64
  atol::Int64
  maxevals::Int64
  initdiv::Int64

Body::Tuple{Float64,Float64}
1 ─ %1 = HCubature.norm::Core.Compiler.Const(LinearAlgebra.norm, false)
│        (norm = %1)
│        (rtol = 0)
│        (atol = 0)
│        (maxevals = HCubature.typemax(HCubature.Int))
│        (initdiv = 1)
│   %7 = HCubature.:(var"#hcubature#3")(norm, rtol::Core.Compiler.Const(0, false), atol::Core.Compiler.Const(0, false), maxevals::Core.Compiler.Const(9223372036854775807, false), initdiv::Core.Compiler.Const(1, false), #self#, f, a, b)::Tuple{Float64,Float64}
└──      return %7

In the documentation, it says that both options can be used. Any ideas on why this is happening?
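For anyone hitting this: with a Vector the number of dimensions is not known at compile time (it becomes the n in the internal SVector{n} type), so inference gives up. A workaround consistent with the output above is to pass tuples or SVectors with floating-point elements:

using HCubature, StaticArrays

f(x, y) = x + y

# Tuples and SVectors carry their length in the type, so the return
# type Tuple{Float64,Float64} is inferable.
hcubature(x -> f(x[1], x[2]), (0.0, 0.0), (1.0, 1.0))
hcubature(x -> f(x[1], x[2]), SVector(0.0, 0.0), SVector(1.0, 1.0))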

Incorrect and variable results for smooth function depending on initdiv

I'm getting inconsistent and incorrect results for integrating a very smooth function:

julia> using HCubature, LinearAlgebra

julia> f = v -> begin
                  c2 = norm(v)^2
                  c2 * pi^(-3/2) * exp(-c2)
              end
#3402 (generic function with 1 method)

julia> a
3-element Array{Float64,1}:
 -100.0
 -100.0
 -100.0

julia> b
3-element Array{Float64,1}:
 100.0
 100.0
 100.0

julia> hcubature(f, a, b, atol=0, initdiv=1)
(0.0, 0.0)

julia> hcubature(f, a, b, atol=0, initdiv=2)
(0.18749999994468158, 2.7930336351243728e-9)

julia> hcubature(f, a, b, atol=0, initdiv=3)
(1.4999999999365934, 2.235159620943183e-8)

julia> hcubature(f, a, b, atol=0, initdiv=4)
(0.18749999994468125, 2.7930336351226787e-9)

The correct answer is 1.5, as can be verified with Mathematica. Perhaps this has something to do with the function being radially symmetric? It also probably has to do with the integrand being zero (to within machine precision) at the boundaries: presumably the initial rule's sample points all land where the integrand is numerically zero, so the error estimate comes out as zero and subdivision stops immediately. If I use boundaries at (-4, 4) instead of (-100, 100), I get the right answer.

Errors with BigFloat boundaries

The following function errors when I call it as erf(BigFloat). It should work, no? It doesn't complain if I call it with other funny types that are nevertheless isbits.

using HCubature

function erf(T)
    lb, ub = T(0), T(1)
    f(x) = exp(-x^2)
    sol = hquadrature(f, lb, ub)
    return sol[1]
end

Promotion of endpoints to a common type?

I was a bit surprised that this failed:

 hcubature(x -> 2.0, (0,0), (2pi, pi))

(Forcing pi to a float through 1pi works, of course.)

Something similar happens with hcubature(x -> 2.0, (0,0), (2, 2.0)). Would there be any drawbacks to promoting the endpoints to a common type? I see this "T is a floating-point type determined by promoting the endpoint a and b coordinates to a floating-point type" in the README, which suggests that should happen, right?
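In the meantime, a workaround in line with the README's description (my sketch, not an official recommendation) is to promote the endpoints to a common floating-point type by hand:

using HCubature

# All endpoint coordinates converted to Float64 up front.
hcubature(x -> 2.0, (0.0, 0.0), (2pi, 1pi))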

Bounds order while using hquadrature

Hi,
I noticed some weird behaviour while using hquadrature:
hquadrature(t->1, 0, -1)[1] outputs 1, while, mathematically speaking, it should be -1. Is this intended, or does anybody have an idea of why this is happening?

Thank you in advance for your help !
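A plausible explanation (my reading of the behavior, not confirmed by the package author): hcubature treats a and b as opposite corners of a box and integrates over the box itself, so the orientation of a one-dimensional interval is lost. The signed convention can be recovered by swapping and negating:

using HCubature

# ∫ from 0 down to -1 equals minus the integral over [-1, 0].
-hquadrature(t -> 1, -1, 0)[1]  # -1.0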

Any plans to add Symbolics.jl inside HCubature.jl?

I have more a question/suggestion than an issue.

Would HCubature.jl gain more features if it incorporated Symbolics.jl?

The scenario that comes to mind is double integrals with variable limits (#16); I imagine that Symbolics.jl could perform the change of variables and compute the Jacobian needed to solve the problem.

Thank you.

Tagging is misconfigured?

Currently the version of HCubature is 1.6.0 in the General registry but 1.5.1 on GitHub. I suppose some configuration for automatic tagging has gone awry?

Internal error got on Julia1.2.0

Some code using HCubature.jl hits an internal error on Julia 1.2.0.
Here is an example of how to reproduce the error:

$ docker run --rm -it julia:1.2.0 \
julia -e 'using Pkg; Pkg.add("HCubature")
          using HCubature
          f(x, y) = x*y^2
          g(f; m=1) = hcubature(w->f(w[1], w[2]), [0,0], [1,1])
          @show g(f)'

Result: see my gist

On the other hand, it works on Julia 1.1.1, 1.3.0, and 1.4.0-DEV (commit d3250fe):

docker run --rm -it julia:1.3.0 \
julia -e 'using Pkg; Pkg.add("HCubature")
          using HCubature
          f(x, y) = x*y^2
          g(f; m=1) = hcubature(w->f(w[1], w[2]), [0,0], [1,1])
          @show g(f)'

HCubature METADATA not yet registered

Sorry to open this package's issue list with such a trivial one. After Pkg.update(), I get:

julia> Pkg.add("HCubature")
ERROR: unknown package HCubature
macro expansion at .\pkg\entry.jl:53 [inlined]
(::Base.Pkg.Entry.##1#3{String,Base.Pkg.Types.VersionSet})() at .\task.jl:335
Stacktrace:
 [1] sync_end() at .\task.jl:287
 [2] macro expansion at .\task.jl:303 [inlined]
 [3] add(::String, ::Base.Pkg.Types.VersionSet) at .\pkg\entry.jl:51
 [4] (::Base.Pkg.Dir.##4#7{Array{Any,1},Base.Pkg.Entry.#add,Tuple{String}})() at .\pkg\dir.jl:36
 [5] cd(::Base.Pkg.Dir.##4#7{Array{Any,1},Base.Pkg.Entry.#add,Tuple{String}}, ::String) at .\file.jl:59
 [6] #cd#1(::Array{Any,1}, ::Function, ::Function, ::String, ::Vararg{String,N} where N) at .\pkg\dir.jl:36
 [7] add(::String) at .\pkg\pkg.jl:117

Question about a 2D integral with a complicated function f

Dear @stevengj,

I would like to ask you a question regarding a 2D integral. Suppose that I have a complicated function f as follows:

Note:
x and y are scalars
A1, A2, A3, A4, and A5 are symmetric matrices
a and b are scalars
g1, g2, g3, and g4 are functions of x, of y, or of both x and y

using LinearAlgebra

function f(x, y, A1, A2, A3, A4, A5, a, b)
    C = g1(x)*A1 + g2(x)*A2 + g3(x,y)*A3 + A4 + g4(x,y)*A5
    Ω = pinv(C)
    return a*tr(Ω) + b*sum(Ω)
end

Suppose that the parameters A1-A5, a, b, and g1-g4 are known; I would like to compute the following integral:

I(f) = \int_{a_1}^{b_1}\int_{a_2}^{b_2} f(x,y,A1,A2,A3,A4,A5,a,b)dxdy

So, my question is: would it be possible to compute I(f) using your package HCubature with this complicated function f? If so, could you please tell me how to specify the function f in your package? Thank you very much in advance!
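A sketch of the standard pattern (my suggestion, reusing the f defined above; the parameter values below are hypothetical stand-ins for the issue's data): hcubature only ever sees a function of a single point v = (x, y), so the fixed parameters are captured in a closure.

using HCubature, LinearAlgebra

# Hypothetical stand-ins for the issue's parameters:
g1(x) = x; g2(x) = x^2; g3(x, y) = x*y; g4(x, y) = x + y
A1 = A2 = A3 = A4 = A5 = Matrix(1.0I, 2, 2)
a, b = 1.0, 2.0
a1, b1, a2, b2 = 0.0, 1.0, 0.0, 1.0

# Capture the fixed parameters in a closure; hcubature supplies v = (x, y).
val, err = hcubature(v -> f(v[1], v[2], A1, A2, A3, A4, A5, a, b),
                     (a1, a2), (b1, b2))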

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

non rectangular integral

How can we compute a non-rectangular integral (where some of the integration limits are variables rather than constants) with this package?
Something like this:
[image: a double integral whose inner limits depend on the outer variable]
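For reference, a common workaround (my sketch; the integrand below is an assumed example, since the issue's image is unavailable) is to rescale the inner variable onto a fixed interval and multiply by the Jacobian. For example, ∫₀¹ ∫₀ˣ f(x,y) dy dx with the substitution y = x·t, dy = x dt:

using HCubature

f(x, y) = x + y  # assumed example integrand

# ∫₀¹ ∫₀ˣ f(x,y) dy dx  →  y = x*t with t ∈ [0, 1], Jacobian dy = x dt
val, err = hcubature(v -> begin
        x, t = v[1], v[2]
        f(x, x * t) * x
    end, (0.0, 0.0), (1.0, 1.0))
# exact value: ∫₀¹ (x² + x²/2) dx = 1/2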

Parallel integrand evaluation [Feature Req.]

I need a pure-Julia integration library that supports multi-dimensional, vector-valued integrands and batch integrand evaluation. HCubature.jl hits all of these criteria except for batch integrand evaluation as in Cubature.jl. Is adding a vectorized method on the horizon?

slow hcubature for large vector-valued integrands

I am integrating L(f) = \int K.(f .- g(x)) dx, where f is a vector of 100-1000 elements, g and K are scalar-valued, and the integration is over R^3. The codomain of the integrand has the same dimension as f because it is easy to vectorize the computation of K.

HCubature does worse than Cubature: it allocates about 100 times as much memory (SVectors do not perform well for large vectors, although the problem may not be SVector itself) and is about 20 times slower.

Here is an MWE with the timings on my machine. Is there a workaround, or would an integrand signature with an extra argument that accepts an in-place return value be considered?

using Cubature
using HCubature
using StaticArrays: SVector
using BenchmarkTools

function test_HCubature(sf)
    cnt = 0

    function integr(pt)
        cnt += 1
        inv.(1 .+ (sf .+ pt[1]*cos(pt[2])*sin(pt[3])).^2)
    end
    xmin, xmax = [0.0,0,-2], [0.5, 2π,+2]
    r, e = HCubature.hcubature(integr, xmin, xmax; rtol=1e-1, atol=1e-1)
    #println("iter: $cnt")
    r, e
end

function test_Cubature(f)
    cnt = 0
    function integr(pt, res)
        cnt += 1
        res .= inv.(1 .+ (f .+ pt[1]*cos(pt[2])*sin(pt[3])).^2)
    end
    xmin, xmax = [0.0,0,-2], [0.5, 2π,+2]
    r, e = Cubature.hcubature(length(f), integr, xmin, xmax; reltol=1e-1, abstol=1e-1)
    #println("iter: $cnt")
    r, e
end

f = collect(range(-3,3,length=101))
sf = SVector{length(f)}(f)

@benchmark test_HCubature($sf)
BenchmarkTools.Trial:
  memory estimate:  1.18 MiB
  allocs estimate:  60213
  --------------
  minimum time:     2.439 ms (0.00% GC)
  median time:      2.578 ms (0.00% GC)
  mean time:        2.689 ms (1.55% GC)
  maximum time:     4.703 ms (34.03% GC)
  --------------
  samples:          1858
  evals/sample:     1

julia> @benchmark test_Cubature($f)
BenchmarkTools.Trial:
  memory estimate:  8.94 KiB
  allocs estimate:  160
  --------------
  minimum time:     15.500 μs (0.00% GC)
  median time:      16.800 μs (0.00% GC)
  mean time:        18.832 μs (2.05% GC)
  maximum time:     2.109 ms (98.33% GC)
  --------------
  samples:          10000
  evals/sample:     1

EDIT: timing with @benchmark.

Mimic `Cubature.INDIVIDUAL` behavior [Feature Req.]

I started to put a PR together to allow for convergence testing based on the individual integrand components, as is the default in Cubature.jl. However, I quickly realized it may get more involved than I initially thought, since the error estimate E is assumed to be scalar in cubrule and in the box updates.

Do you have any thoughts/suggestions for adding this feature? Thanks.

Check for NaN?

Currently, if you naïvely try to integrate a function that produces a NaN in the integration region, hcubature will just run forever, presumably because the rtol calculation doesn't check for NaN. Obviously, users should be thinking about their code and not integrating functions that produce NaN, but it would be nice if an error were thrown on encountering NaN instead of silently running forever (unless one sets maxevals).

Here's a MWE:

f((x, y)) = y ≥ 0 ? NaN : x*y

hcubature(f, [-1, -1], [1, 1])
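Until such a check exists, a user-side guard is straightforward (a sketch assuming a scalar-valued integrand):

using HCubature

f((x, y)) = y ≥ 0 ? NaN : x * y

# Wrap the integrand so a NaN fails fast instead of looping forever.
checked(f) = x -> begin
    v = f(x)
    isnan(v) && error("integrand returned NaN at x = $x")
    return v
end

hcubature(checked(f), [-1, -1], [1, 1])  # throws instead of hanging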

A question regarding nodes and weights in hcubature

Dear @stevengj

I would like to ask you a question about computing ∫∫g(x)g(y)h(x,y)dxdy. I hope it is fine to post my question here. Below is my problem.

using QuadGK, HCubature
# get a full grid
function fullGrid(x1,x2,w1,w2)
    pW = vec(collect.(Iterators.product(w1,w2)))
    pX = vec(collect.(Iterators.product(x1,x2)))
    lp = length(pW)
    w = zeros(lp)
    x = zeros(lp,2)
    for i = 1:lp
        w[i] = pW[i][1]*pW[i][2]  
        x[i,1] = pX[i][1]
        x[i,2] = pX[i][2]       
    end
    return x,w
end
g(x) = 1/(1+x^2) # this is my weight function
h(x) = 1 # I simplify the function h(x). In my case, h(x) is quite complex and is very costly to compute.

I would like to compute ∫∫g(x)g(y)h(x,y)dxdy over a hypercube [a, b]^2 where a,b ≥ 0. Here I use two approaches based on your two packages (QuadGK and HCubature).

function test(n,lb,ub)
    # Approach 1: 
    # get quadrature nodes for each dimension with a weight function g(x)
    x1, w1 = QuadGK.gauss(x ->g(x), n, lb[1], ub[1], rtol=1e-10)
    x2, w2 = QuadGK.gauss(x ->g(x), n, lb[2], ub[2], rtol=1e-10)
    # obtain a full grid
    x, w = fullGrid(x1,x2,w1,w2)
    s1 = sum(h(x) .* w) # I made a mistake here. f ---> h
    f(x) = g(x[1])*g(x[2])*h(x)
    # Approach 2: using HCubature as a benchmark
    s2 = hcubature(x -> f(x), lb, ub)
    return s1, s2[1]
end

Below are the results with the two approaches:

n = 20 # the number of sample points
lb1 = [0,0]
ub1 = [1,1]
# case 1
res1 = test(n,lb1,ub1)
# res1 = (0.6166469119120699, 0.616850275222724)

In this case, it looks fine with Approach 1.

# case 2:
lb2 = [6,6]
ub2 = [10,10]
res2 = test(n,lb2,ub2)
# res2 =(0.004287633663977574, 0.004287633663975462)

Approach 1 still works in this case when changing the intervals, even though the weight function g(x) decreases rapidly for large x. Since I would like to obtain nodes and weights to compute ∫∫g(x)g(y)h(x,y)dxdy, could I extract nodes and weights from hcubature, given the weight functions g(x) and g(y)? The function h(x,y) is very costly to compute.

Thank you very much for your help. I am sorry if my question is very basic since I am a beginner at numerical integration.

hcubature not type stable

Related: SciML/Integrals.jl#65

using HCubature, Distributions, Test

μ = [0.00 , 0.00]
Σ = [0.4 0.0 ; 0.00 0.4]
d = MvNormal(μ , Σ)
f = (x) -> pdf(d, x)
lb, ub = [-1 , -1] , [1 , 1]
@inferred hcubature(f, lb, ub;
            maxevals=typemax(Int))
ERROR: return type Tuple{Float64, Float64} does not match inferred return type Tuple{Any, Any}
Stacktrace:
 [1] error(s::String)
   @ Base ./error.jl:33
 [2] top-level scope
   @ REPL[7]:1

Inconsistent results for Inf integrands

I am looking at some SciML problems where, depending on the stability of the ODE, the integrand used w/ HCubature.jl may return Inf. I am noticing inconsistent behavior across hquadrature and hcubature for single and multi-dim domains.

using HCubature

g(x) = x[1] > 0.0 ? Inf : x[1]

hquadrature(g, -1.0, 1.0)    #Inf
hcubature(g, -ones(1), ones(1)) #Inf
hcubature(g, -ones(2), ones(2)) #NaN

I see similar behavior in Cubature.jl as well, but Cubature.hcubature returns Inf up to dim = 4 and then NaN, while Cubature.pcubature returns Inf up to dim = 17 (I haven't tested past that).

Is this the expected behavior? I assume this is due to some Inf*0 that is propagating down.

Thanks.
