JuliaFolds / FoldsCUDA.jl

Data-parallelism on CUDA using Transducers.jl and for loops (FLoops.jl)

License: MIT License

Topics: gpu, cuda, julia, transducers, parallel, high-performance, map-reduce, iterators

FoldsCUDA


FoldsCUDA.jl provides a Transducers.jl-compatible fold (reduce) implemented using CUDA.jl. This brings the transducers and reducing-function combinators implemented in Transducers.jl to the GPU. Furthermore, using FLoops.jl, you can write parallel for loops that run on the GPU.

API

FoldsCUDA exports CUDAEx, a parallel loop executor. It can be used with parallel for loops created with FLoops.@floop, the Base-like high-level parallel API in Folds.jl, and the extensible transducers provided by Transducers.jl.
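A minimal sketch of the executor with the Folds.jl API (this assumes a CUDA-capable device is available; `Folds.sum` accepting an executor as the last argument is the Folds.jl convention):

```julia
using FoldsCUDA, Folds, CUDA

xs = CUDA.ones(Float32, 10)
s = Folds.sum(xs, CUDAEx())  # the reduction is executed on the GPU
```

Passing a different executor (e.g. `ThreadedEx()`) to the same call switches the backend without changing the algorithm.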

Examples

findmax using FLoops.jl

You can pass the CUDA executor FoldsCUDA.CUDAEx() to @floop to run a parallel for loop on the GPU:

julia> using FoldsCUDA, CUDA, FLoops

julia> using GPUArrays: @allowscalar

julia> xs = CUDA.rand(10^8);

julia> @allowscalar xs[100] = 2;

julia> @allowscalar xs[200] = 2;

julia> @floop CUDAEx() for (x, i) in zip(xs, eachindex(xs))
           @reduce() do (imax = -1; i), (xmax = -Inf32; x)
               if xmax < x
                   xmax = x
                   imax = i
               end
           end
       end

julia> xmax
2.0f0

julia> imax  # the *first* position for the largest value
100

extrema using Transducers.TeeRF

julia> using Transducers, Folds

julia> @allowscalar xs[300] = -0.5;

julia> Folds.reduce(TeeRF(min, max), xs, CUDAEx())
(-0.5f0, 2.0f0)

julia> Folds.reduce(TeeRF(min, max), (2x for x in xs), CUDAEx())  # iterator comprehension works
(-1.0f0, 4.0f0)

julia> Folds.reduce(TeeRF(min, max), Map(x -> 2x)(xs), CUDAEx())  # equivalent, using a transducer
(-1.0f0, 4.0f0)

More examples

For more examples, see the examples section in the documentation.


foldscuda.jl's Issues

Function call and floop simultaneously

I attached an example below. The example works great, but if I want to modify `x` inside the function call, the function no longer works. There are real speed gains from combining `@floop` with `CUDAEx()` compared to the other options, and I want to exploit them while also being able to modify `x` within the function. Is that possible?

### Packages
using CUDA, FLoops, BenchmarkTools, FoldsCUDA

### User inputs
nvec = 1000000
M = 50
x = CuArray(rand(Float32, (M, nvec)))

### Function setup
function parallel_multi(f, x)
    @floop CUDAEx() for i in 1:size(x, 2)
        val = reduce(*, @view(x[:, i]))        # works
        # val = reduce(*, @view(x[:, i].^2))   # doesn't work
        # val = reduce(*, x[:, i].^2)          # doesn't work
        f[i] = val
    end
    return f
end

result = CUDA.ones(Float32, (size(x, 2), 1))

### Comparing speeds
display(@benchmark parallel_multi(result, $x))
display(@benchmark reduce(*, $x, dims = 1))
display(@benchmark prod($x, dims = 1))  # identical to the line above
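One possible workaround (a sketch, not verified against this exact setup): the broadcasts `x[:, i].^2` allocate a temporary array, and dynamic allocation is generally not allowed inside GPU kernel code, which would explain why those variants fail. Applying the squaring elementwise via `mapreduce` avoids materializing the temporary:

```julia
function parallel_multi_sq(f, x)
    @floop CUDAEx() for i in 1:size(x, 2)
        # abs2 squares each element on the fly, so no temporary
        # array like x[:, i].^2 is allocated inside the kernel
        val = mapreduce(abs2, *, @view(x[:, i]))
        f[i] = val
    end
    return f
end
```

For transformations other than squaring, the same pattern applies: move the elementwise function into the first argument of `mapreduce` instead of broadcasting before the reduction.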

error if reducing over empty collection

MWE:

julia> using FoldsCUDA, Folds

julia> Folds.foreach(identity, 1:0, CUDAEx())
ERROR: DomainError with 0:
`x` must be positive.
Stacktrace:
  [1] nextpow(a::Int64, x::Int64)
    @ Base ./intfuncs.jl:457
  [2] _transduce!(buf::Nothing, rf::Transducers.Reduction{Transducers.MapSplat{typeof(identity)}, Transducers.BottomRF{typeof(Folds.Implementations.return_nothing)}}, init::Nothing, arrays::UnitRange{Int64})
    @ FoldsCUDA ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:123
  [3] transduce_impl(rf::Transducers.Reduction{Transducers.MapSplat{typeof(identity)}, Transducers.BottomRF{typeof(Folds.Implementations.return_nothing)}}, init::Nothing, arrays::UnitRange{Int64})
    @ FoldsCUDA ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:32
  [4] _transduce_cuda(op::Function, init::Nothing, xs::Base.Iterators.Zip{Tuple{UnitRange{Int64}}})
    @ FoldsCUDA ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:18
  [5] #_transduce_cuda#5
    @ ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:1 [inlined]
  [6] _transduce_cuda
    @ ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:1 [inlined]
  [7] transduce(xf::Transducers.MapSplat{typeof(identity)}, rf::typeof(Folds.Implementations.return_nothing), init::Nothing, xs::Base.Iterators.Zip{Tuple{UnitRange{Int64}}}, exc::CUDAEx{NamedTuple{(), Tuple{}}})
    @ FoldsCUDA ~/.julia/packages/FoldsCUDA/Mo35m/src/api.jl:45
  [8] foreach(f::Function, itr::UnitRange{Int64}, itrs::CUDAEx{NamedTuple{(), Tuple{}}}; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ Folds.Implementations ~/.julia/packages/Folds/ZayPF/src/reduce.jl:89
  [9] foreach(f::Function, itr::UnitRange{Int64}, itrs::CUDAEx{NamedTuple{(), Tuple{}}})
    @ Folds.Implementations ~/.julia/packages/Folds/ZayPF/src/reduce.jl:77
 [10] top-level scope
    @ REPL[9]:1
 [11] top-level scope
    @ ~/.julia/packages/CUDA/Ey3w2/src/initialization.jl:52

It seems like

wanted_threads = nextpow(2, n)

just needs a special case for n == 0.
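A sketch of the suggested guard (the function name here is hypothetical; in FoldsCUDA the `nextpow` call sits inside the kernel launch configuration):

```julia
# nextpow(2, 0) throws DomainError, so special-case n == 0
function wanted_threads_for(n::Integer)
    n == 0 && return 1   # launch a single (no-op) thread for empty input
    return nextpow(2, n)
end
```

With such a guard, reducing over an empty collection could fall through to the fold's `init`/identity handling instead of erroring during launch configuration.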

`Folds.map` with `CuArray`s not working

julia> Folds.map(x -> x + 1, cu([1,2,3]), CUDAEx())
ERROR: FoldsCUDA.FailedInference: Kernel is inferred to return invalid type: BangBang.SafeCollector{Vector{Int64}}
HINT: if this exception is caught as `err`, use `CUDA.code_typed(err)` to introspect the erroneous code.
Stacktrace:
  [1] _infer_acctype(rf::Function, init::BangBang.SafeCollector{BangBang.NoBang.Empty{Vector{Union{}}}}, arrays::Tuple{CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}}, include_init::Bool)
    @ FoldsCUDA ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:112
  [2] _infer_acctype
    @ ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:97 [inlined]
  [3] _transduce!(buf::Nothing, rf::Transducers.Reduction{Transducers.Map{typeof(first)}, Transducers.Reduction{Transducers.Map{var"#21#22"}, Transducers.Reduction{Transducers.Map{Type{BangBang.NoBang.SingletonVector}}, Transducers.BottomRF{Transducers.AdHocRF{typeof(BangBang.collector), typeof(identity), typeof(BangBang.append!!), typeof(identity), typeof(identity), Nothing}}}}}, init::BangBang.SafeCollector{BangBang.NoBang.Empty{Vector{Union{}}}}, arrays::CuArray{Int64, 1, CUDA.Mem.DeviceBuffer})
    @ FoldsCUDA ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:128
  [4] transduce_impl(rf::Transducers.Reduction{Transducers.Map{typeof(first)}, Transducers.Reduction{Transducers.Map{var"#21#22"}, Transducers.Reduction{Transducers.Map{Type{BangBang.NoBang.SingletonVector}}, Transducers.BottomRF{Transducers.AdHocRF{typeof(BangBang.collector), typeof(identity), typeof(BangBang.append!!), typeof(identity), typeof(identity), Nothing}}}}}, init::BangBang.SafeCollector{BangBang.NoBang.Empty{Vector{Union{}}}}, arrays::CuArray{Int64, 1, CUDA.Mem.DeviceBuffer})
    @ FoldsCUDA ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:32
  [5] _transduce_cuda(op::Function, init::BangBang.SafeCollector{BangBang.NoBang.Empty{Vector{Union{}}}}, xs::Transducers.Eduction{Transducers.Reduction{Transducers.Map{var"#21#22"}, Transducers.BottomRF{Transducers.Completing{typeof(BangBang.push!!)}}}, CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}})
    @ FoldsCUDA ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:18
  [6] #_transduce_cuda#5
    @ ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:1 [inlined]
  [7] _transduce_cuda
    @ ~/.julia/packages/FoldsCUDA/Mo35m/src/kernels.jl:1 [inlined]
  [8] transduce
    @ ~/.julia/packages/FoldsCUDA/Mo35m/src/api.jl:45 [inlined]
  [9] collect(itr::Transducers.Eduction{Transducers.Reduction{Transducers.Map{var"#21#22"}, Transducers.BottomRF{Transducers.Completing{typeof(BangBang.push!!)}}}, CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}}, ex::CUDAEx{NamedTuple{(), Tuple{}}})
    @ Folds.Implementations ~/.julia/packages/Folds/ZayPF/src/collect.jl:4
 [10] map(f::Function, itr::CuArray{Int64, 1, CUDA.Mem.DeviceBuffer}, ex::CUDAEx{NamedTuple{(), Tuple{}}})
    @ Folds.Implementations ~/.julia/packages/Folds/ZayPF/src/collect.jl:84
 [11] top-level scope
    @ REPL[14]:1
 [12] top-level scope
    @ ~/.julia/packages/CUDA/BbliS/src/initialization.jl:52

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

Skip the non-GPU tests on Buildkite

Since the pool of JuliaGPU Buildkite runners is pretty small (right now we only have two GPU runners), we should make sure that we only run GPU tests on Buildkite.

We can detect if we are on Buildkite with:

haskey(ENV, "BUILDKITE")

If the BUILDKITE environment variable exists, we should skip the non-GPU tests.

cc: @tkf
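The check above could be wired into the test runner along these lines (a sketch for `test/runtests.jl`; the test-file names are assumptions):

```julia
# Buildkite runners are the scarce GPU machines, so run only GPU
# tests there and only the non-GPU suite everywhere else.
if haskey(ENV, "BUILDKITE")
    @info "Buildkite detected: running GPU tests only"
    # include("gpu_tests.jl")   # hypothetical file name
else
    @info "Not on Buildkite: running non-GPU tests"
    # include("cpu_tests.jl")   # hypothetical file name
end
```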
