juliafolds / FoldsCUDA.jl
Data-parallelism on CUDA using Transducers.jl and for loops (FLoops.jl)
License: MIT License
Since the pool of JuliaGPU Buildkite runners is pretty small (right now we only have two GPU runners), we should make sure that we only run GPU tests on Buildkite.
We can detect if we are on Buildkite with:
haskey(ENV, "BUILDKITE")
If the BUILDKITE environment variable exists, we should skip the non-GPU tests.
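A minimal sketch of how that gating could look in test/runtests.jl (the split into test_gpu.jl and test_core.jl is hypothetical, not the repository's actual layout):

# Run GPU tests only on the Buildkite runners; run the CPU-only tests
# everywhere else. The two included files are hypothetical names.
using Test

const ON_BUILDKITE = haskey(ENV, "BUILDKITE")

@testset "FoldsCUDA.jl" begin
    if ON_BUILDKITE
        include("test_gpu.jl")    # GPU tests, only on Buildkite
    else
        include("test_core.jl")   # non-GPU tests, skipped on Buildkite
    end
end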
cc: @tkf
Calling Folds.map with the CUDAEx executor on a CuArray fails with a FailedInference error:
julia> Folds.map(x -> x + 1, cu([1,2,3]), CUDAEx())
ERROR: FoldsCUDA.FailedInference: Kernel is inferred to return invalid type: BangBang.SafeCollector{Vector{Int64}}
HINT: if this exception is caught as `err`, use `CUDA.code_typed(err)` to introspect the erroneous code.
Stacktrace:
[1] _infer_acctype(rf::Function, init::BangBang.SafeCollector{BangBang.NoBang.Empty{Vector{Union{}}}}, arrays::Tuple{CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}}, include_init::Bool)
@ FoldsCUDA ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:112
[2] _infer_acctype
@ ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:97 [inlined]
[3] _transduce!(buf::Nothing, rf::Transducers.Reduction{Transducers.Map{typeof(first)}, Transducers.Reduction{Transducers.Map{var"#21#22"}, Transducers.Reduction{Transducers.Map{Type{BangBang.NoBang.SingletonVector}}, Transducers.BottomRF{Transducers.AdHocRF{typeof(BangBang.collector), typeof(identity), typeof(BangBang.append!!), typeof(identity), typeof(identity), Nothing}}}}}, init::BangBang.SafeCollector{BangBang.NoBang.Empty{Vector{Union{}}}}, arrays::CuArray{Int64, 1, CUDA.Mem.DeviceBuffer})
@ FoldsCUDA ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:128
[4] transduce_impl(rf::Transducers.Reduction{Transducers.Map{typeof(first)}, Transducers.Reduction{Transducers.Map{var"#21#22"}, Transducers.Reduction{Transducers.Map{Type{BangBang.NoBang.SingletonVector}}, Transducers.BottomRF{Transducers.AdHocRF{typeof(BangBang.collector), typeof(identity), typeof(BangBang.append!!), typeof(identity), typeof(identity), Nothing}}}}}, init::BangBang.SafeCollector{BangBang.NoBang.Empty{Vector{Union{}}}}, arrays::CuArray{Int64, 1, CUDA.Mem.DeviceBuffer})
@ FoldsCUDA ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:32
[5] _transduce_cuda(op::Function, init::BangBang.SafeCollector{BangBang.NoBang.Empty{Vector{Union{}}}}, xs::Transducers.Eduction{Transducers.Reduction{Transducers.Map{var"#21#22"}, Transducers.BottomRF{Transducers.Completing{typeof(BangBang.push!!)}}}, CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}})
@ FoldsCUDA ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:18
[6] #_transduce_cuda#5
@ ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:1 [inlined]
[7] _transduce_cuda
@ ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:1 [inlined]
[8] transduce
@ ~/.julia/packages/FoldsCUDA/Mo35m/src/api.jl:45 [inlined]
[9] collect(itr::Transducers.Eduction{Transducers.Reduction{Transducers.Map{var"#21#22"}, Transducers.BottomRF{Transducers.Completing{typeof(BangBang.push!!)}}}, CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}}, ex::CUDAEx{NamedTuple{(), Tuple{}}})
@ Folds.Implementations ~/.julia/packages/Folds/ZayPF/src/collect.jl:4
[10] map(f::Function, itr::CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, ex::CUDAEx{NamedTuple{(), Tuple{}}})
@ Folds.Implementations ~/.julia/packages/Folds/ZayPF/src/collect.jl:84
[11] top-level scope
@ REPL[14]:1
[12] top-level scope
@ ~/.julia/packages/CUDA/BbliS/src/initialization.jl:52
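While the inference failure above stands, a workaround for this particular elementwise case (my suggestion, not something from the report) is to bypass FoldsCUDA's collector and use CUDA.jl's broadcast-based map directly:

# Workaround sketch: CUDA.jl implements `map` on a CuArray via broadcast,
# so it never goes through the SafeCollector that fails to infer above.
using CUDA

xs = cu([1, 2, 3])
ys = map(x -> x + 1, xs)   # computes [2, 3, 4] on the GPU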
Folds.foreach with CUDAEx() throws a DomainError when the input collection is empty. MWE:
julia> using FoldsCUDA, Folds
julia> Folds.foreach(identity, 1:0, CUDAEx())
ERROR: DomainError with 0:
`x` must be positive.
Stacktrace:
[1] nextpow(a::Int64, x::Int64)
@ Base ./intfuncs.jl:457
[2] _transduce!(buf::Nothing, rf::Transducers.Reduction{Transducers.MapSplat{typeof(identity)}, Transducers.BottomRF{typeof(Folds.Implementations.return_nothing)}}, init::Nothing, arrays::UnitRange{Int64})
@ FoldsCUDA ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:123
[3] transduce_impl(rf::Transducers.Reduction{Transducers.MapSplat{typeof(identity)}, Transducers.BottomRF{typeof(Folds.Implementations.return_nothing)}}, init::Nothing, arrays::UnitRange{Int64})
@ FoldsCUDA ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:32
[4] _transduce_cuda(op::Function, init::Nothing, xs::Base.Iterators.Zip{Tuple{UnitRange{Int64}}})
@ FoldsCUDA ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:18
[5] #_transduce_cuda#5
@ ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:1 [inlined]
[6] _transduce_cuda
@ ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:1 [inlined]
[7] transduce(xf::Transducers.MapSplat{typeof(identity)}, rf::typeof(Folds.Implementations.return_nothing), init::Nothing, xs::Base.Iterators.Zip{Tuple{UnitRange{Int64}}}, exc::CUDAEx{NamedTuple{(), Tuple{}}})
@ FoldsCUDA ~/.julia/packages/FoldsCUDA/Mo35m/src/api.jl:45
[8] foreach(f::Function, itr::UnitRange{Int64}, itrs::CUDAEx{NamedTuple{(), Tuple{}}}; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ Folds.Implementations ~/.julia/packages/Folds/ZayPF/src/reduce.jl:89
[9] foreach(f::Function, itr::UnitRange{Int64}, itrs::CUDAEx{NamedTuple{(), Tuple{}}})
@ Folds.Implementations ~/.julia/packages/Folds/ZayPF/src/reduce.jl:77
[10] top-level scope
@ REPL[9]:1
[11] top-level scope
@ ~/.julia/packages/CUDA/Ey3w2/src/initialization.jl:52
Seems like line 124 of src/kernels.jl (at commit bc96774) does not handle the `n == 0` case.
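A guard along these lines would avoid the error (a sketch only; the actual fix belongs wherever the kernel launch computes its block size, and the surrounding _transduce! internals are simplified here):

# Hypothetical early return for empty input: with nothing to reduce,
# the fold is just `init`, and `nextpow(2, 0)` is never evaluated.
n = length(first(arrays))
n == 0 && return init
threads = nextpow(2, n)   # original code path, now only reached for n > 0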
I attached an example below. The example works great, but if I want to transform x inside the loop body, the function no longer works. There are real speed gains from combining @floop and CUDAEx() compared to the other options, and I want to be able to exploit them while also modifying x within the function. Is that possible?
### Packages
using CUDA, FLoops, BenchmarkTools, FoldsCUDA

### User inputs
nvec = 1_000_000
M = 50
x = CuArray(rand(Float32, (M, nvec)))

### Function setup
function parallel_multi(f, x)
    @floop CUDAEx() for i in 1:size(x, 2)
        val = reduce(*, @view(x[:, i]))         # works
        # val = reduce(*, @view(x[:, i] .^ 2))  # doesn't work
        # val = reduce(*, x[:, i] .^ 2)         # doesn't work
        f[i] = val
    end
    return f
end

result = CUDA.ones(Float32, (size(x, 2), 1))

### Comparing speeds
display(@benchmark parallel_multi(result, $x))
display(@benchmark reduce(*, $x, dims = 1))
display(@benchmark prod($x, dims = 1))  # identical to the line above
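The commented-out lines fail because x[:, i] .^ 2 materializes a temporary array inside the kernel, and GPU device code cannot allocate. A sketch that squares each element on the fly instead, assuming the product of squares is the goal (the loop body is my own, not from the report):

function parallel_multi_sq(f, x)
    @floop CUDAEx() for i in 1:size(x, 2)
        val = one(eltype(x))
        for j in 1:size(x, 1)
            val *= x[j, i]^2   # square elementwise, no temporary array
        end
        f[i] = val
    end
    return f
end

This keeps the per-column reduction sequential but avoids the per-iteration allocation, which is what the GPU compiler rejects.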
This issue is used to trigger TagBot; feel free to unsubscribe.
If you haven't already, you should update your TagBot.yml
to include issue comment triggers.
Please see this post on Discourse for instructions and more details.