
Multi-device OpenCL kernel load balancer and pipeliner API for C#. Uses a shared-distributed memory model to keep GPUs updated fast while using the same kernel on all devices (for simplicity).

License: GNU General Public License v3.0

opencl-kernels iterative load-balancer pipelining multi-device gpgpu multi-gpu zero-copy gpu-computing gpu-acceleration

cekirdekler's Introduction

Cekirdekler

C# multi-device GPGPU (OpenCL) compute API with iterative inter-device load balancing and multiple pipelining of read/write/compute operations for developers' custom OpenCL kernels. The main idea is to treat N devices as a single device when possible, taking advantage of the entire platform easily, through a shared-distributed memory model under the hood.

64-bit only: set "project settings -> build -> platform target -> x64". The configuration manager also needs to target x64.

Needs an extra C++ DLL built in 64-bit (x86_64) from https://github.com/tugrul512bit/CekirdeklerCPP, which must be named KutuphaneCL.dll.

The other required DLL is Microsoft's System.Threading.dll (and its XML helper) for .NET 2.0. Alternatively, adjust the "using" directives and target .NET 3.5+ in your own project, in which case System.Threading.dll is not needed.

In total, Cekirdekler.dll, KutuphaneCL.dll, and .NET 3.5 should be enough.

Usage: add only Cekirdekler.dll and System.Threading.dll as references to your C# project. The other files need to exist in the same folder as Cekirdekler.dll or the main project's executable.

This project is being enhanced using ZenHub.

Features

  • Implicit multi-device control: from CPUs to any number of GPUs and accelerators. Device handling is explicit on the library side for compatibility and performance, but implicit for the client coder, so GPGPU work can concentrate on the OpenCL kernel code. Devices can be selected implicitly (ease of setup) or explicitly (detailed device queries), and explicitly chosen devices can be combined with a simple + operator (see the sketch after this list).
  • Iterative load balancing between devices: done separately for each distinct compute (explicit control via a user-given compute id). Devices receive fairer and fairer work loads until the distribution ratio converges. Partitioning the workload completes a kernel with less latency, which suits hot-spot loops and some simple embarrassingly parallel algorithms, and works even better for streaming data with the pipelining option enabled.
  • Pipelining of reads, computes and writes (host - device link): either by the mercy of device drivers or through explicit event-based queue management. Hides the latency of the least time-consuming part (such as writes) behind the most time-consuming part (such as compute). GPUs can run buffer copies and OpenCL kernels concurrently.
  • Pipelining between devices (device - host - device): concurrently runs multiple stages so they overlap in the timeline, gaining the advantage of multiple GPUs (and FPGAs, CPUs) even for the non-separable kernels (because of atomics and low-level optimizations) of a time-consuming pipeline. Each device runs a different kernel, but at the same time as the other devices, and uses double buffers to overlap even the data movements between pipeline stages.
  • Batch computing using task pools and device pools: uses every async pipeline of every GPU in the system for a pool of non-separable kernels (as tasks to compute later). A greedy scheduling algorithm keeps all GPUs busy.
  • Works with different numeric arrays: either C# arrays like float[], int[], byte[], ... or C++ array wrappers like ClFloatArray, ClArray<float>, ClByteArray, ClArray<byte>.
  • Automatic buffer copy optimizations per device: if a device shares RAM with the CPU, map/unmap commands are used instead of read/write to reduce the number of array copies. If that device is also given a C++ wrapper array (such as ClArray<float>), the buffer is created with the CL_MEM_USE_HOST_PTR flag for zero-copy access, aka "streaming". By default, all devices have their own buffers.
  • Two usage styles: the first lets the developer declare all kernel parameters as arrays explicitly, for a more readable execution; the second expresses the same thing with a much shorter definition, in fewer lines of code, changing only the necessary flags instead of all of them.
  • Automatic resource disposal: when C++ array wrappers are finalized (out of scope, garbage collected), they release their resources. The dispose method can also be called explicitly by the developer.
  • Uses OpenCL 1.2: built on the C++ bindings from Khronos.org. Developers are expected to know C99 and its OpenCL kernel constraints to write their own genuine GPGPU kernels. The CekirdeklerCPP project produces the OpenCL 1.2 backend DLL.
  • Uses OpenCL 2.0: same bindings and constraints; the CekirdeklerCPP2 project produces the OpenCL 2.0 backend DLL (which needs to be renamed to KutuphaneCL.dll).
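For explicit device selection, here is a minimal sketch of combining devices with the + operator. The selector names (ClPlatforms.all(), gpus(), cpus()) are assumptions modeled on the wiki, not verified API:

    using Cekirdekler;
    using Cekirdekler.Hardware; // assumed namespace for explicit device queries

    ClPlatforms platforms = ClPlatforms.all();  // assumed: query all OpenCL platforms

    // explicitly chosen devices combine with + and are then
    // treated as a single device by the number cruncher
    ClDevices devices = platforms.gpus() + platforms.cpus();

    ClNumberCruncher cruncher = new ClNumberCruncher(devices, @"
        __kernel void add1(__global float * arr)
        {
            int i = get_global_id(0);
            arr[i] += 1.0f;
        }");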

Documentation

Details and a tutorial are in the Cekirdekler wiki.

Known Issues

  • C++ array wrappers like ClArray<float> have no out-of-bounds check; don't cross boundaries when indexing.
  • Don't use C++ array wrappers after they are disposed. These safety checks were left out to keep array indexing fast.
  • Don't use ClNumberCruncher or Core instances after they are disposed.
  • Pay attention to the "number of array elements used" per workitem in the kernel and how it is passed to the API's compute() method (see the flag sketch after this list).
  • Pay attention to the "partial read"/"read"/"write" array copy modifiers depending on whether your kernel alters (or reads) the whole array or just a part of it.
  • No performance output on the first iteration: the load balancer needs several iterations to distribute work fairly, and the performance report needs at least 2 iterations before printing to the console.
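To make the last few items concrete, here is a minimal sketch of the per-array flags; the property names (read, partialRead, write, numberOfElementsPerWorkItem) follow the wiki's examples and should be treated as assumptions if the API has changed:

    using Cekirdekler;
    using Cekirdekler.ClArrays;

    ClNumberCruncher cruncher = new ClNumberCruncher(AcceleratorType.GPU, @"
        __kernel void processFour(__global float * data)
        {
            int i = get_global_id(0) * 4;          // 4 elements per workitem
            for (int j = 0; j < 4; j++)
                data[i + j] *= 2.0f;
        }");

    ClArray<float> data = new ClArray<float>(1024);
    data.read = true;                     // copy host -> device before the kernel runs
    data.partialRead = false;             // copy the whole array, not just this device's range
    data.write = true;                    // copy device -> host after the kernel runs
    data.numberOfElementsPerWorkItem = 4; // each workitem consumes 4 floats

    // 256 workitems * 4 elements each == 1024 elements: sizes must agree,
    // since the API does not catch out-of-bounds access inside the kernel
    data.compute(cruncher, 1, "processFour", 256, 64);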

Example that computes 1000 workitems across all GPUs in a PC: GPU1 computes the global id range 0 to M, GPU2 computes M+1 to K, and GPU_N computes the global id range Y to Z.

        Cekirdekler.ClNumberCruncher cr = new Cekirdekler.ClNumberCruncher(
            Cekirdekler.AcceleratorType.GPU, @"
                __kernel void hello(__global char * arr)
                {
                    int threadId=get_global_id(0);
                    printf(""hello world"");
                }
            ");

        Cekirdekler.ClArrays.ClArray<byte> array = new Cekirdekler.ClArrays.ClArray<byte>(1000);
        // Cekirdekler.ClArrays.ClArray<byte> array = new byte[1000]; // host arrays are usable too!
        array.compute(cr, 1, "hello", 1000, 100);
        // the local size is 100 here, so this example spawns 10 workgroups and all GPUs share them:
        // e.g. GPU1 computes 2 groups, GPU2 computes 5 groups and another GPU computes 3 groups.
        // global id values are continuous across all workitems; local id values are also safe to use.
        // faster GPUs get a bigger work share over repetitions of the same compute.

        // no need to dispose anything at the end; resources are released when out of scope / garbage collected.
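Continuing the example above, the load balancer's convergence can be observed through the performance feed; the performanceFeed property name follows the wiki and is an assumption if the API has changed:

        cr.performanceFeed = true; // print per-device timings and work shares to the console

        for (int i = 0; i < 10; i++)
            array.compute(cr, 1, "hello", 1000, 100); // work shares are rebalanced on each iteration
        // note: no performance output on the first iteration (see Known Issues)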


cekirdekler's Issues

Sequential kernel executions in the same `compute()` method

array.compute(cruncher, 1, "kernel1 kernel2 kernel3", globalSize, localSize)

Here, all kernels listed in the parameter run with the same globalSize and localSize. globalSize and localSize should support multiple values, perhaps by overloading compute() with an array/list parameter.
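One possible shape for such an overload, purely as a proposal sketch (nothing below exists in the API yet):

  // hypothetical overload: one global/local size pair per kernel name
  array.compute(cruncher, 1,
                new string[] { "kernel1", "kernel2", "kernel3" },
                new int[]    { 1024,      2048,      512 },  // global sizes
                new int[]    { 64,        128,       64  }); // local sizes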

Explicit device selection

Can be useful when the developer doesn't need all GPUs at once in OpenCL. Maybe something like device lists in different categories:

  ClDevice.getGpuList()
  ClDevice.getAccList()                     // random order with device names so the user can choose
  ClDevice.getDeviceWithMaxComputeUnit()    // a 20-thread CPU is not the same as a 20-CU HD7870!
  ClDevice.getDeviceWithBenchmark("nbody"); // gets the top-scoring device
  ClDevice.activateDynamicDeviceSwitching() // switches to another device when performance oscillates too much
                                            // (GTX_titan 1ms 3ms 2ms 3ms 1ms, then switches to gtx_950 10ms 11ms 10ms 9ms)

Error handling for every single OpenCL command

Maybe less performance, but a better description when something bad happens. There is already a Test class for testing the implementation, but developer faults need to be taken care of.

For now, it only reports OpenCL kernel compile errors, such as "float5 is not defined" and similar.

  • added error handling for error-returning function calls
  • need to add error handling for buffer creation and buffer mapping (reported through a parameter, not a return value)

Disposing unused buffers with a warning message

The API creates a new buffer for each unique array given as a parameter; with enough arrays, it could run out of resources.

  • LRU cache to hold at most N buffers (regardless of individual sizes) with a total size constraint
    (default = RAM / 2?); a minimal eviction sketch follows this list
  • save data to disk when disposed, read it back from disk when re-created
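A generic sketch of the proposed LRU eviction (not part of the API; the OpenCL buffer release / disk-spill step is only marked by a comment):

  using System.Collections.Generic;

  // evicts the least recently used buffer once capacity is exceeded
  class LruBufferCache<TKey, TBuffer>
  {
      readonly int capacity;
      readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TBuffer>>> map =
          new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TBuffer>>>();
      readonly LinkedList<KeyValuePair<TKey, TBuffer>> order =
          new LinkedList<KeyValuePair<TKey, TBuffer>>();

      public LruBufferCache(int capacity) { this.capacity = capacity; }

      public void Put(TKey key, TBuffer buffer)
      {
          if (map.TryGetValue(key, out var node)) { order.Remove(node); map.Remove(key); }
          map[key] = order.AddFirst(new KeyValuePair<TKey, TBuffer>(key, buffer));
          if (map.Count > capacity)
          {
              var last = order.Last; // least recently used
              // here: release the OpenCL buffer (with a warning) and/or spill its data to disk
              map.Remove(last.Value.Key);
              order.RemoveLast();
          }
      }

      public bool TryGet(TKey key, out TBuffer buffer)
      {
          if (map.TryGetValue(key, out var node))
          {
              order.Remove(node);
              order.AddFirst(node); // mark as most recently used
              buffer = node.Value.Value;
              return true;
          }
          buffer = default(TBuffer);
          return false;
      }
  }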

Arrays: bounds check before compute.

Just like the workitem bounds check, but taking the "elementsPerWorkItem" value into consideration against the total work size and the array size.

Arrays will be allowed to be bigger than the used range, but not smaller. A sketch of the check follows.
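What compute() could assert before enqueueing, as a sketch with illustrative names:

  using System;

  static class BoundsCheck
  {
      // hypothetical pre-compute validation: every workitem's slice must fit in the array
      public static void Validate(int globalSize, int elementsPerWorkItem, int arrayLength)
      {
          if ((long)globalSize * elementsPerWorkItem > arrayLength)
              throw new ArgumentException(string.Format(
                  "array too small: {0} workitems x {1} elements/workitem > {2} elements",
                  globalSize, elementsPerWorkItem, arrayLength));
      }
  }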

No offline compiler

Adding clCreateProgramWithBinary() support might be useful for FPGA owners: an FPGA may take hours to compile a single kernel while a gaming GPU does it in seconds.
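A hypothetical C# surface for this, purely a proposal sketch; the programBinary() getter and fromBinary() factory below do not exist, and the underlying OpenCL call would be clCreateProgramWithBinary():

  using System.IO;

  string kernelSource = "__kernel void k(__global float * a) { a[get_global_id(0)] += 1.0f; }";

  // first run: compile once from source, save the device binary
  ClNumberCruncher cruncher = new ClNumberCruncher(Cekirdekler.AcceleratorType.GPU, kernelSource);
  File.WriteAllBytes("kernels.bin", cruncher.programBinary()); // hypothetical getter

  // later runs: skip the hours-long FPGA compile by loading the saved binary
  ClNumberCruncher fast = ClNumberCruncher.fromBinary(
      File.ReadAllBytes("kernels.bin"));                       // hypothetical factory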

Lazy compute

There is no lazy compute for now.

var compute1 = array1.queueCompute()
var compute2 = compute1.nextStep(array2.queueCompute()).compute()

Could be useful, with fewer synchronizations.

C++ array wrapper re-creation (and compute) in a loop throws CL_INVALID_MEM_OBJECT, but works for a prepared N-array of C++ arrays

Found the root cause: re-creating inside a loop has a chance of getting the same pointer back (C++ / OS memory management), so a USE_HOST_PTR-flagged buffer throws an error because duplicate buffer objects end up with the same pointer. A workaround sketch follows the notes below.

Todo: in Cores/ClNumberCruncher, release the buffer bound to the hashcode of the ClArray<T> being destructed


  • because C# probably generates the same hashcode after some iterations, the API reuses the same OpenCL buffer (flagged USE_HOST_PTR) that still holds an old/deleted array pointer; USE_HOST_PTR-type buffers need to be re-checked whenever accessed.

    or

  • Parallel.For and buffer read/write (or workers[i].kernelArgument) get overlapped (or even out-of-bounds) addressing that throws AggregateException_ctor_DefaultMessage plus System.AccessViolationException

  • no problem for C# arrays

  • probably caused by the USE_HOST_PTR buffer allocation failure, which is not yet error-checked

  • or the OpenCL implementation bugs out when a pointer is deleted while it is still held by an OpenCL buffer as CL_MEM_USE_HOST_PTR
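Until this is fixed, a workaround consistent with the issue title is to allocate the wrapper arrays once before the loop instead of re-creating them inside it; a sketch (cruncher and the "step" kernel are placeholders):

  using Cekirdekler.ClArrays;

  // fails intermittently: each new wrapper may receive the same native pointer,
  // colliding with a stale CL_MEM_USE_HOST_PTR buffer cached by the API:
  // for (int i = 0; i < 8; i++) {
  //     ClArray<float> tmp = new ClArray<float>(4096);
  //     tmp.compute(cruncher, i, "step", 4096, 64);
  // }

  // works: prepare an N-array of wrappers up front, then reuse them
  ClArray<float>[] buffers = new ClArray<float>[8];
  for (int i = 0; i < 8; i++) buffers[i] = new ClArray<float>(4096);
  for (int i = 0; i < 8; i++) buffers[i].compute(cruncher, i, "step", 4096, 64);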

Explicit Pipelining

pipeline1.push(a.nextParam(b).read()).push(c.compute()).push(d.write()).finish()

pipeline1.overlap(pipeline2,pipeline3).finish()
