
LogExpFunctions.jl


Various special functions based on log and exp, moved from StatsFuns.jl into a separate package to minimize dependencies. These functions use only native Julia code, so there is no need to depend on librmath or similar libraries. See the discussion at StatsFuns.jl#46.

The original authors of these functions are the StatsFuns.jl contributors.


LogExpFunctions.jl's Issues

`init` keyword for `logsumexp` and co

It would be great to have an init keyword for empty collections, similar to sum's.

I would basically expect

logsumexp(Float64[]) == -Inf

The tricky part is keeping the result type stable, so I am not sure how to go about this.
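
A minimal sketch of one way this could look; the init keyword and the name logsumexp_with_init are hypothetical, not an existing API:

using LogExpFunctions

# Fall back to `init` for empty collections; otherwise fold `init` in
# via `logaddexp`, since logaddexp(-Inf, y) == y.
function logsumexp_with_init(X; init)
    isempty(X) && return init
    return logaddexp(init, logsumexp(X))
end

logsumexp_with_init(Float64[]; init = -Inf)   # -Inf
logsumexp_with_init([1.0, 2.0]; init = -Inf)  # == logsumexp([1.0, 2.0])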

`AbstractIrrational` does not play nice with CUDA

Hi, it seems that many of the functions are not compatible with CUDA.jl out of the box: comparisons against AbstractIrrational constants fall back to arbitrary-precision (setprecision/BigFloat) code, which cannot be compiled for the GPU. Here's a MWE:

LogExpFunctions.log1mexp.(CuVector([-1f0, -2f0, -3f0]))
ERROR: InvalidIRError: compiling MethodInstance for (::GPUArrays.var"#broadcast_kernel#26")(::CUDA.CuKernelContext, ::CuDeviceVector{Float32, 1}, ::Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{1}, Tuple{Base.OneTo{Int64}}, typeof(log1mexp), Tuple{Base.Broadcast.Extruded{CuDeviceVector{Float32, 1}, Tuple{Bool}, Tuple{Int64}}}}, ::Int64) resulted in invalid LLVM IR
Reason: unsupported dynamic function invocation (call to var"#setprecision#25"(kws::Base.Pairs{Symbol, V, Tuple{Vararg{Symbol, N}}, NamedTuple{names, T}} where {V, N, names, T<:Tuple{Vararg{Any, N}}}, ::typeof(setprecision), f::Function, ::Type{T}, prec::Integer) where T @ Base.MPFR mpfr.jl:969)
Stacktrace:
 [1] setprecision
   @ ./mpfr.jl:969
 [2] Type
   @ ./irrationals.jl:69
 [3] <
   @ ./irrationals.jl:96
 [4] log1mexp
   @ ~/.julia/packages/LogExpFunctions/jq98q/src/basicfuns.jl:234
 [5] _broadcast_getindex_evalf
   @ ./broadcast.jl:683
 [6] _broadcast_getindex
   @ ./broadcast.jl:656
 [7] getindex
   @ ./broadcast.jl:610
 [8] broadcast_kernel
   @ ~/.julia/packages/GPUArrays/5XhED/src/host/broadcast.jl:59
Hint: catch this exception as `err` and call `code_typed(err; interactive = true)` to introspect the erronous code with Cthulhu.jl
Stacktrace:
  [1] check_ir(job::GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}, args::LLVM.Module)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/YO8Uj/src/validation.jl:149
  [2] macro expansion
    @ ~/.julia/packages/GPUCompiler/YO8Uj/src/driver.jl:415 [inlined]
  [3] macro expansion
    @ ~/.julia/packages/TimerOutputs/RsWnF/src/TimerOutput.jl:253 [inlined]
  [4] macro expansion
    @ ~/.julia/packages/GPUCompiler/YO8Uj/src/driver.jl:414 [inlined]
  [5] emit_llvm(job::GPUCompiler.CompilerJob; libraries::Bool, toplevel::Bool, optimize::Bool, cleanup::Bool, only_entry::Bool, validate::Bool)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/YO8Uj/src/utils.jl:89
  [6] emit_llvm
    @ ~/.julia/packages/GPUCompiler/YO8Uj/src/utils.jl:83 [inlined]
  [7] codegen(output::Symbol, job::GPUCompiler.CompilerJob; libraries::Bool, toplevel::Bool, optimize::Bool, cleanup::Bool, strip::Bool, validate::Bool, only_entry::Bool, parent_job::Nothing)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/YO8Uj/src/driver.jl:129
  [8] codegen
    @ ~/.julia/packages/GPUCompiler/YO8Uj/src/driver.jl:110 [inlined]
  [9] compile(target::Symbol, job::GPUCompiler.CompilerJob; libraries::Bool, toplevel::Bool, optimize::Bool, cleanup::Bool, strip::Bool, validate::Bool, only_entry::Bool)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/YO8Uj/src/driver.jl:106
 [10] compile
    @ ~/.julia/packages/GPUCompiler/YO8Uj/src/driver.jl:98 [inlined]
 [11] #1037
    @ ~/.julia/packages/CUDA/tVtYo/src/compiler/compilation.jl:104 [inlined]
 [12] JuliaContext(f::CUDA.var"#1037#1040"{GPUCompiler.CompilerJob{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}})
    @ GPUCompiler ~/.julia/packages/GPUCompiler/YO8Uj/src/driver.jl:47
 [13] compile(job::GPUCompiler.CompilerJob)
    @ CUDA ~/.julia/packages/CUDA/tVtYo/src/compiler/compilation.jl:103
 [14] actual_compilation(cache::Dict{Any, CuFunction}, src::Core.MethodInstance, world::UInt64, cfg::GPUCompiler.CompilerConfig{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}, compiler::typeof(CUDA.compile), linker::typeof(CUDA.link))
    @ GPUCompiler ~/.julia/packages/GPUCompiler/YO8Uj/src/execution.jl:125
 [15] cached_compilation(cache::Dict{Any, CuFunction}, src::Core.MethodInstance, cfg::GPUCompiler.CompilerConfig{GPUCompiler.PTXCompilerTarget, CUDA.CUDACompilerParams}, compiler::Function, linker::Function)
    @ GPUCompiler ~/.julia/packages/GPUCompiler/YO8Uj/src/execution.jl:103
 [16] macro expansion
    @ ~/.julia/packages/CUDA/tVtYo/src/compiler/execution.jl:318 [inlined]
 [17] macro expansion
    @ ./lock.jl:267 [inlined]
 [18] cufunction(f::GPUArrays.var"#broadcast_kernel#26", tt::Type{Tuple{CUDA.CuKernelContext, CuDeviceVector{Float32, 1}, Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{1}, Tuple{Base.OneTo{Int64}}, typeof(log1mexp), Tuple{Base.Broadcast.Extruded{CuDeviceVector{Float32, 1}, Tuple{Bool}, Tuple{Int64}}}}, Int64}}; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ CUDA ~/.julia/packages/CUDA/tVtYo/src/compiler/execution.jl:313
 [19] cufunction
    @ ~/.julia/packages/CUDA/tVtYo/src/compiler/execution.jl:310 [inlined]
 [20] macro expansion
    @ ~/.julia/packages/CUDA/tVtYo/src/compiler/execution.jl:104 [inlined]
 [21] #launch_heuristic#1080
    @ ~/.julia/packages/CUDA/tVtYo/src/gpuarrays.jl:17 [inlined]
 [22] launch_heuristic
    @ ~/.julia/packages/CUDA/tVtYo/src/gpuarrays.jl:15 [inlined]
 [23] _copyto!
    @ ~/.julia/packages/GPUArrays/5XhED/src/host/broadcast.jl:65 [inlined]
 [24] copyto!
    @ ~/.julia/packages/GPUArrays/5XhED/src/host/broadcast.jl:46 [inlined]
 [25] copy
    @ ~/.julia/packages/GPUArrays/5XhED/src/host/broadcast.jl:37 [inlined]
 [26] materialize(bc::Base.Broadcast.Broadcasted{CUDA.CuArrayStyle{1}, Nothing, typeof(log1mexp), Tuple{CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}}})
    @ Base.Broadcast ./broadcast.jl:873
 [27] top-level scope
    @ REPL[22]:1
 [28] top-level scope
    @ ~/.julia/packages/CUDA/tVtYo/src/initialization.j

Simply changing the definition of log1mexp to the following fixes the issue:

log1mexp_cuda(x::T) where {T <: Real} = x < log(T(1)/2) ? log1p(-exp(x)) : log(-expm1(x))
julia> log1mexp_cuda.(CuVector([-1f0, -2f0, -3f0]))
3-element CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}:
 -0.4586752
 -0.14541346
 -0.051069178

Do we really need IrrationalConstants here?

`logabsgamma` doesn't work with ForwardDiff

ERROR: MethodError: no method matching _logabsgamma(::ForwardDiff.Dual{ForwardDiff.Tag{Optim.var"#135#136"{LiteHF.var"#54#55"{LiteHF.var"#52#53"{Tuple{LiteHF.FlatPrior{Int64}, Distributions.Normal{Float64}, Distributions.Normal{Float64}, LiteHF.RelaxedPoisson{Float64}, LiteHF.RelaxedPoisson{Float64}}}, LiteHF.var"#45#48"{LiteHF.var"#27#35"{Tuple{LiteHF.var"#40#44"{Tuple{Vector{Int64}, Vector{Int64}, Vector{Int64}}, Tuple{ExpCounts{}, ExpCounts{}, ExpCounts{}}}}}, Tuple{JSON3.Array{Int64, Vector{UInt8}, SubArray{UInt64, 1, Vector{UInt64}, Tuple{UnitRange{Int64}}, true}}}, LiteHF.var"#f#47"}}}, Float64}, Float64, 5})
Closest candidates are:
  _logabsgamma(::Float64) at ~/.julia/packages/SpecialFunctions/CQMHW/src/gamma.jl:601
  _logabsgamma(::Float32) at ~/.julia/packages/SpecialFunctions/CQMHW/src/gamma.jl:606
  _logabsgamma(::Float16) at ~/.julia/packages/SpecialFunctions/CQMHW/src/gamma.jl:611
  ...
Stacktrace:
  [1] logabsgamma(x::ForwardDiff.Dual{ForwardDiff.Tag{Optim.var"#135#136"{LiteHF.var"#54#55"{LiteHF.var"#52#53"{Tuple{LiteHF.FlatPrior{Int64}, Distributions.Normal{Float64}, Distributions.Normal{Float64}, LiteHF.RelaxedPoisson{Float64}, LiteHF.RelaxedPoisson{Float64}}}, LiteHF.var"#45#48"{LiteHF.var"#27#35"{Tuple{LiteHF.var"#40#44"{Tuple{Vector{Int64}, Vector{Int64}, Vector{Int64}}, Tuple{ExpCounts{}, ExpCounts{}, ExpCounts{}}}}}, Tuple{JSON3.Array{Int64, Vector{UInt8}, SubArray{UInt64, 1, Vector{UInt64}, Tuple{UnitRange{Int64}}, true}}}, LiteHF.var"#f#47"}}}, Float64}, Float64, 5})
    @ SpecialFunctions ~/.julia/packages/SpecialFunctions/CQMHW/src/gamma.jl:599

Should this be raised in SpecialFunctions instead?

Some tests failing on macOS

Running the test suite on macOS I get:

log2mexp: Test Failed at ~/Julia/Forks/LogExpFunctions.jl/test/basicfuns.jl:173
  Expression: log2mexp(-(T(1))) ≈ log(2 - exp(-(T(1))))
   Evaluated: 0.23447205f0 ≈ 0.48988014f0

Tests pass on Ubuntu (which is the only platform currently covered by CI).

Support for `Num` from `Symbolics.jl`

At the moment, logsumexp and related functions do not work with symbolic computations implemented using Symbolics.jl:

using LogExpFunctions
using Symbolics
@variables xs[1:2]
logsumexp(xs)

# ERROR: TypeError: non-boolean (Num) used in boolean context
# Stacktrace:
#  [1] _logsumexp_onepass(X::Symbolics.Arr{Num, 1})
#    @ LogExpFunctions ~/.julia/packages/LogExpFunctions/Vfuhq/src/logsumexp.jl:62
#  [2] _logsumexp
#    @ ~/.julia/packages/LogExpFunctions/Vfuhq/src/logsumexp.jl:52 [inlined]
#  [3] #logsumexp#5
#    @ ~/.julia/packages/LogExpFunctions/Vfuhq/src/logsumexp.jl:30 [inlined]
#  [4] logsumexp(X::Symbolics.Arr{Num, 1})
#    @ LogExpFunctions ~/.julia/packages/LogExpFunctions/Vfuhq/src/logsumexp.jl:30
#  [5] top-level scope
#    @ ~/PlantingSpace/learning/src/tmp_daniel.jl:17

A naive implementation of logsumexp makes it work as expected:

LogExpFunctions.logsumexp(xs::AbstractVector{Num}) = log(sum(exp, xs))
logsumexp(xs)
# log(Symbolics._mapreduce(exp, +, xs, Colon(), (:init => false,)))

substitute(logsumexp(xs), Dict(xs => log.([0.5, 0.5]))) == 0
# true

It would be great if Symbolics.Num could be supported properly, e.g. as a package extension.
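
A hedged sketch of what such an extension could look like, mirroring the package's existing weak-dependency extensions (the module name is an assumption):

module LogExpFunctionsSymbolicsExt

using LogExpFunctions: LogExpFunctions
using Symbolics: Num

# Naive but symbolically valid fallback; numerical stability is not a
# concern for symbolic expressions.
LogExpFunctions.logsumexp(xs::AbstractVector{Num}) = log(sum(exp, xs))

end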

[FR] smoothed log sum implementation

Thanks for this cool little package. I have a feature request/question. In discrete choice models (for example, equation 14 in this paper), we often work with a smoothed version of the log-sum function. That is, instead of

alpha + log(sum(exp(x - alpha)))

we'd have

alpha + σ * log(sum(exp((x - alpha) / σ)))

I was trying to think about how to add this to the package (maybe in my own fork, if it turns out not to be of interest here), but I'm not totally sure where. My best guess would have been to do the division by σ in places like here, but I'm not certain. Thanks for any hints!
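
For reference, a minimal sketch of the smoothed variant under the formula above; smoothed_logsumexp is a hypothetical name, not part of the package:

using LogExpFunctions

function smoothed_logsumexp(x; σ = 1.0)
    α = maximum(x)  # shift for numerical stability
    return α + σ * log(sum(xi -> exp((xi - α) / σ), x))
end

# Reduces to the ordinary logsumexp at σ = 1:
smoothed_logsumexp([1.0, 2.0, 3.0]; σ = 1.0) ≈ logsumexp([1.0, 2.0, 3.0])  # true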

`logabssinh`

LogExpFunctions already has logcosh. Normally, where those terms appear, there are also log(abs(sinh)) terms; in my specific field, the term appears in natural-gas property correlations.

logabssinh can be expressed in terms of LogExpFunctions.log1mexp:

using LogExpFunctions: log1mexp
using IrrationalConstants

function logabssinh(x::Real)
    abs_x = abs(x)
    # |sinh(x)| = exp(|x|) * (1 - exp(-2|x|)) / 2
    return abs_x + log1mexp(-2 * abs_x) - IrrationalConstants.logtwo
end
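
A quick sanity check against the naive formula, using the definition above:

julia> logabssinh(5.0) ≈ log(abs(sinh(5.0)))
true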

Is it ok to have this function here?

xexpx(z)

I frequently encounter the function xexpx(z) = z * exp(z), with the caveat that xexpx(-Inf) == 0. Thus xexpx(z) = xlogx(exp(z)), but computing it directly is faster because it avoids a superfluous log call and might prevent intermediate overflow.

Can we consider adding the definition:

xexpx(x::Real) = x == -Inf ? exp(x) : x * exp(x)
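
With the definition above, the branch exists because the naive product is NaN at -Inf:

julia> xexpx(-Inf)
0.0

julia> -Inf * exp(-Inf)  # exp(-Inf) == 0.0, and -Inf * 0.0 is NaN
NaN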

`log1pmx` and `logmxp1` cannot be differentiated by `ForwardDiff`

using LogExpFunctions

using ForwardDiff
using Zygote
using Test

log1mexpm(x) = log1mexp(-x)
log2mexpm(x) = log2mexp(-x)

function test_ad_vs_zygote(AD)
    @testset verbose = true "LogExpFunctions" begin
        @testset "Single-argument functions" begin
            @testset "$f" for f in (                      
                log1pmx,
                logexpm1,
                logsumexp,
                xexpx,
                log1psq,
                logistic,
                invsoftplus,
                log2mexpm,
                logit,
                log1mexpm,
                logmxp1,
                xlogx,
                log1pexp,
                logcosh
            )
                for _ in 1:100
                    par = rand()
                    @test AD.derivative(f, par) ≈ only(only(Zygote.gradient(f ∘ only, [par])))
                end
            end
        end
        @testset "Two-argument functions" begin
            @testset "$f" for f in (                      
                xexpy,
                xlogy,
                xlog1py,
                logaddexp,
                logsubexp,
            )
                for _ in 1:100
                    par = rand(2)
                    @test AD.gradient(x -> f(x...), par) ≈ only(Zygote.gradient(x -> f(x...), par))
                end
            end
        end
        @testset "Vector-argument functions" begin
                @testset "$f" for f in (                      
                softmax,
            )
                for _ in 1:100
                    par = rand(3)
                    @test AD.jacobian(f, par) ≈ only(Zygote.jacobian(f, par))
                end
            end
        end
    end
    return nothing
end

test_ad_vs_zygote(ForwardDiff)

yields

Test Summary:               | Pass  Error  Total
LogExpFunctions             | 1800    200   2000
  Single-argument functions | 1200    200   1400
    log1pmx                 |         100    100
    logexpm1                |  100           100
    logsumexp               |  100           100
    xexpx                   |  100           100
    log1psq                 |  100           100
    logistic                |  100           100
    logexpm1                |  100           100
    log2mexpm               |  100           100
    logit                   |  100           100
    log1mexpm               |  100           100
    logmxp1                 |         100    100
    xlogx                   |  100           100
    log1pexp                |  100           100
    logcosh                 |  100           100
  Two-argument functions    |  500           500
  Vector-argument functions |  100           100
ERROR: Some tests did not pass: 1800 passed, 0 failed, 200 errored, 0 broken.

log1pexp(x) when x < -37

I was reading R's implementation of log1pexp and they include one additional branch (see https://cran.r-project.org/web/packages/Rmpfr/vignettes/log1mexp-note.pdf, equation 10).

Currently StatsFuns does this:

https://github.com/JuliaStats/StatsFuns.jl/blob/ae40bc1c6ee63a5624ba6cd168da5bf43f473e59/src/basicfuns.jl#L66

With the additional branch it would be:

log1pexp_v2(x::AbstractFloat) = x ≤ -37. ? exp(x) : x ≤ 18. ? log1p(exp(x)) : x ≤ 33.3 ? x + exp(-x) : x

Apparently including the x ≤ -37 check reduces the function's computation time by 25% (because it saves the call to log1p, I guess). But I am no expert in this kind of performance testing, so could someone more knowledgeable please check? Here is the test I did:

using BenchmarkTools
@btime for x = -100. : 1. : 100., _ = 1 : 100
    log1pexp_v2(x)
end   #  310.471 μs (0 allocations: 0 bytes)
@btime for x = -100. : 1. : 100., _ = 1 : 100
    log1pexp(x)
end  #   423.671 μs (0 allocations: 0 bytes)

Both log1pexp_v2 and log1pexp give numerically identical results.

If this is worth it I can prepare a PR.

Update: Actually, it seems much of the time is due to the oftype(exp(-x), x) bit in the final branch. I think specializing on AbstractFloat would take care of that. Even after this change, the x ≤ -37 check is faster.

clarify ChainRulesCore.jl integration

The README should make it clear that

  1. all functions have working forward- and reverse rules, implemented via ChainRulesCore.jl

  2. pull requests should implement and test these, too
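
For example, such rules are conventionally checked against finite differences with ChainRulesTestUtils.jl; a sketch, assuming that package as a test dependency:

using ChainRulesTestUtils
using LogExpFunctions

# Verify the forward- and reverse-mode rules of a scalar function.
test_frule(log1pexp, 0.5)
test_rrule(log1pexp, 0.5)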

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Citation?

I've made extensive use of your package in a paper and I would like to include a citation. Please add a CITATION.cff or CITATION.bib file.

Register?

Can we consider registering this package?

Error during loading of extension

Loading LogExpFunctions and one of its weakdeps in a clean env on 1.9-beta4:

julia> using LogExpFunctions, InverseFunctions
 │ Packages [LogExpFunctions, InverseFunctions] not found, but
 │ packages named [LogExpFunctions, InverseFunctions] are
 │ available from a registry. 
 │ Install packages?
 │   (@v1.9) pkg> add LogExpFunctions InverseFunctions 
 └ Select environment:
 > 1: `/tmp/jl_PYeWrG` (/tmp/jl_PYeWrG)
   2: `~/.julia/environments/v1.9/Project.toml` (@v#.#)
   Resolving package versions...
    Updating `/tmp/jl_PYeWrG/Project.toml`
  [3587e190] + InverseFunctions v0.1.8
  [2ab3a3ac] + LogExpFunctions v0.3.22
    Updating `/tmp/jl_PYeWrG/Manifest.toml`
  [ffbed154] + DocStringExtensions v0.9.3
  [3587e190] + InverseFunctions v0.1.8
  [92d709cd] + IrrationalConstants v0.1.1
  [2ab3a3ac] + LogExpFunctions v0.3.22
  [56f22d72] + Artifacts
  [2a0f44e3] + Base64
  [b77e0a4c] + InteractiveUtils
  [76f85450] + LibGit2
  [8f399da3] + Libdl
  [37e2e46d] + LinearAlgebra
  [56ddb016] + Logging
  [d6f4376e] + Markdown
  [ca575930] + NetworkOptions v1.2.0
  [de0858da] + Printf
  [9a3f8284] + Random
  [ea8e919c] + SHA v0.7.0
  [9e88b42a] + Serialization
  [8dfed614] + Test
  [4ec0a83e] + Unicode
  [e66e0078] + CompilerSupportLibraries_jll v1.0.2+0
  [4536629a] + OpenBLAS_jll v0.3.21+0
  [8e850b90] + libblastrampoline_jll v5.4.0+0
┌ Error: Error during loading of extension InverseFunctionsExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
└ @ Base loading.jl:1191

1.9-beta4 extension errors

I'm not sure if this is a problem with LogExpFunctions or with extensions, but on 1.9-beta4:

│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
│  ┌ Error: Error during loading of extension InverseFunctionsExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ LogExpFunctionsChainRulesCoreExt [1bf5f11d-9a0a-5d25-85d0-d1d9a28a239c]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ CairoMakie [13f3f980-e62b-5c42-98c6-ff1f3baf88f0]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
│  ┌ Error: Error during loading of extension InverseFunctionsExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ ImageAxes [2803e5a7-5153-5ecf-9a86-9b4c37f5f5ac]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ Distributions [31c24e10-a181-5473-b8eb-7969acd0382f]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
│  ┌ Error: Error during loading of extension InverseFunctionsExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ ForwardDiff [f6369f11-7733-5829-9624-2563aa707210]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ PlotUtils [995b91a9-d308-5afd-9ec6-746e21dbc043]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ DiffEqBaseMeasurementsExt [f030c83e-7d7f-544f-a855-333e4a237e34]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ IntervalRootFinding [d2bf35a9-74e0-55ec-b149-d360ff49b807]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ FreeTypeAbstraction [663a7486-cb36-511b-a19d-713bb74d65c9]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ RecurrenceAnalysis [639c3291-70d9-5ea2-8c5b-839eba1ee399]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
│  ┌ Error: Error during loading of extension InverseFunctionsExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ SpecialFunctions [276daf66-3868-5448-9aa4-cd146d93841b]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ GLM [38e38edf-8417-5370-95a0-9cbb8c7f171a]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
│  ┌ Error: Error during loading of extension InverseFunctionsExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ SymbolicUtils [d1185830-fcd6-423d-90d6-eec64667417b]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ Entropies [ed8fcbec-b94c-44b6-89df-898894ad9591]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
│  ┌ Error: Error during loading of extension InverseFunctionsExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ ImageBase [c817782e-172a-44cc-b673-b171935fbb9e]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ StatsModels [3eaba693-59b7-5ba5-a881-562e759f1c8d]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
│  ┌ Error: Error during loading of extension InverseFunctionsExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ DualNumbers [fa6b7ba4-c1ee-5f82-b5fc-ecf0adba8f74]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ Sixel [45858cf5-a6b0-47a3-bbea-62219f50df47]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ DynamicalSystemsBase [6e36e845-645a-534a-86f2-f5d4aa5a06b4]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
│  ┌ Error: Error during loading of extension InverseFunctionsExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ DiffEqBase [2b5f629d-d688-5b77-993f-72d75c75574e]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ PNGFiles [f57f5aa1-a3ce-4bc8-8ab9-96f992907883]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ MathTeXEngine [0a4f8689-d25c-4efe-a92b-7142dfc1aa53]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ DSP [717857b8-e6f2-59f4-9121-6e50c889abd2]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ ImageMetadata [bc367c6b-8a6b-528e-b4bd-a4b897500b49]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ ColorVectorSpace [c3611d14-8923-5661-9e6a-0046d554d3a4]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  
┌ LombScargle [fc60dff9-86e7-5f2f-a8a0-edeadbb75bd9]
│  ┌ Error: Error during loading of extension ChainRulesCoreExt of LogExpFunctions, use `Base.retry_load_extensions()` to retry.
│  └ @ Base loading.jl:1191
└  

logcosh

Currently logcosh(x) = log(cosh(x)) is defined in NNlib like this:

logcosh(x) = x + softplus(-2x) - oftype(float(x), log(2))

It probably makes sense to have this function here, since it is a "log-exp-function" ...
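
The rearrangement matters because the naive form overflows for large |x|; for example, with the definition above (and NNlib's softplus in scope):

julia> log(cosh(1000.0))  # cosh overflows to Inf
Inf

julia> logcosh(1000.0)    # ≈ 1000 - log(2)
999.3068528194401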

Add logmeanexp

Hi,

I noticed that R and Python have logmeanexp. I think this would be a simple, yet useful addition. I would be willing to make a PR. The basic function would be

function logmeanexp(X::AbstractArray{<:Number})
    return logsumexp(X) - log(length(X))
end

Some additional logic would be needed to handle the dims keyword.
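
A hedged sketch of what dims support might look like; logmeanexp is the proposed name, not an existing function:

using LogExpFunctions

function logmeanexp(X::AbstractArray{<:Number}; dims = :)
    dims === Colon() && return logsumexp(X) - log(length(X))
    n = prod(size(X, d) for d in dims)  # number of elements reduced over
    return logsumexp(X; dims = dims) .- log(n)
end

logmeanexp(randn(3, 4); dims = 1)  # 1×4 Matrix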

Softmax segfaults with offset arrays

Due to improper inbounds annotations, softmax segfaults when called on an OffsetArray:

yurivish@Compy ~ % julia
               _
   _       _ _(_)_     |  Documentation: https://docs.julialang.org
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.5.3 (2020-11-09)
 _/ |\__'_|_|_|\__'_|  |  Official https://julialang.org/ release
|__/                   |

julia> using StatsFuns, OffsetArrays

julia> softmax(OffsetArray(collect(1:11), -5:5))
zsh: segmentation fault  julia
yurivish@Compy ~ %

https://github.com/JuliaStats/StatsFuns.jl/blob/8dfda2c0ee33d5f85eca5c039d31d85c90f363f2/src/basicfuns.jl#L330-L342

julia> versioninfo()
Julia Version 1.5.3
Commit 788b2c77c1 (2020-11-09 13:37 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin18.7.0)
  CPU: Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-9.0.1 (ORCJIT, skylake)

logsumexp(::SVector) allocates

logsumexp allocates for static arrays, while for e.g. Vector it doesn't.

julia> VERSION
v"1.7.2-pre.0"

(@v1.7) pkg> st LogExpFunctions
      Status `~/.julia/environments/v1.7/Project.toml`
  [2ab3a3ac] LogExpFunctions v0.3.6

(@v1.7) pkg> st StaticArrays
      Status `~/.julia/environments/v1.7/Project.toml`
  [90137ffa] StaticArrays v1.3.1

julia> using StaticArrays, LogExpFunctions, BenchmarkTools

julia> x = randn(10)
10-element Vector{Float64}:
 -1.6157998430573786
  1.0318728667047257
  0.26968254689646015
  0.4903347054466403
 -1.117751634134204
 -0.4744921716401031
  0.9685578891398771
 -0.5060390401590071
  0.5209797766678386
 -0.864777496851762

julia> s = SVector(x...);

julia> @btime logsumexp($x)
  93.435 ns (0 allocations: 0 bytes)
2.5045882285138097

julia> @btime logsumexp($s)
  149.875 ns (4 allocations: 256 bytes)
2.5045882285138097

Weighted logsumexp

Let w, x be vectors. I'm interested in computing log(dot(w, exp.(x))). The current logsumexp functions can be used to compute this, but one must take log.(w), which is troublesome when w can be very small or zero. I took a look at the source and I think it should actually be straightforward to extend the current code. The end goal would be to implement x -> log.(K * exp.(x)) where K is a matrix and x is a vector.

What I'm not so familiar with is how to implement the reduction efficiently in Julia. At a high level, the first thing that comes to mind is a binary operation of the form x, (y, w) -> log(exp(x) + w * exp(y)) and a reduction over pairs of data and weights. But I'd appreciate some input on how this is best done; a rough sketch of what I mean follows.
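
A hedged sketch of the usual shift trick applied to the weighted case, i.e. log(sum(w .* exp.(x))) = m + log(sum(w .* exp.(x .- m))) with m = maximum(x); weighted_logsumexp and logKexp are hypothetical names, not part of the package:

function weighted_logsumexp(w, x)
    m = maximum(x)  # shift for numerical stability; zero weights are fine
    return m + log(sum(wi * exp(xi - m) for (wi, xi) in zip(w, x)))
end

# The end goal x -> log.(K * exp.(x)), computed row by row:
logKexp(K, x) = [weighted_logsumexp(view(K, i, :), x) for i in axes(K, 1)]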
