
Comments (10)

srajama1 avatar srajama1 commented on May 25, 2024 1

@dholladay00: Can we discuss this more over email? Include me in the email chain with @kyungjoo-kim. This will help us plan for Kokkos Kernels.

from kokkos-kernels.

crtrott avatar crtrott commented on May 25, 2024 1

include me as well

mhoemmen avatar mhoemmen commented on May 25, 2024 1

@dholladay00 You're welcome to include me if you like.

kyungjoo-kim avatar kyungjoo-kim commented on May 25, 2024

What are the problem sizes of interest? When you mention a team- or thread-level functor interface for dense linear algebra, you probably want to solve small or mid-range problems. Depending on the problem sizes, the implementation may differ in how it uses fast memory. Do you also need to solve the same problem size across teams, or different sizes?

From my experience, a team-level interface is effective on GPUs, but on KNL, MKL already provides good performance for almost all problem sizes (except for tiny problems whose dimensions are 3, 5, 10, etc.).

Please let me know the application and workflow scenario. The advantage of using Kokkos Kernels lies in understanding the workflow (not in providing generic versions of libraries that already exist).

dholladay00 avatar dholladay00 commented on May 25, 2024

Each team must solve a block tridiagonal linear system with non-uniform block sizes (the size of block row 1 can differ from the size of block row 2, etc.). Those sizes tend to range from ~10 up to ~1000. While at some points in the problem each team will have the same-sized block tridiagonal system, at later points those sizes can differ, so it is probably best to assume that each team is solving a different-sized matrix.

I currently use MKL for LU decomposition (dgetrf and dgetrs), but I use a hand-written team-level function for dgemm and dgemv. However, I might go back to using MKL for everything, as I have been running into issues on machines that have more than 1 thread/core, despite enforcing a team size of 1 (non-deterministic failures that are difficult to reproduce in tests, etc.).

kyungjoo-kim avatar kyungjoo-kim commented on May 25, 2024

I see.

  1. There are multiple tridiagonal systems (a parallel for can be used).
  2. Each tridiagonal system is composed of irregular blocks ranging between 10 and 100.
  3. However, those tridiagonal systems have the same length and the same internal pattern (which possibly allows stacking and vectorizing across them).

Do you get any performance benefit from your hand-written team-level code compared to MKL? Since this is a dense tridiagonal factorization and solve, do you measure the performance on KNL in terms of GFLOP/s? We can move to email for detailed information.

mhoemmen avatar mhoemmen commented on May 25, 2024

Recent versions of MKL have batched BLAS for DGEMM at least. You might just be able to call that.

dholladay00 avatar dholladay00 commented on May 25, 2024

I vote emails for much of this.

But to answer some questions:

  1. Yes, I am using a parallel for with a team policy.
  2. That is roughly correct, though block sizes could be > 100 (but probably < 1000).
  3. I'll send an email regarding this, as it's somewhat complicated.

The majority of time is spent calculating the matrix elements, so it is difficult to measure, but either way the performance differences between MKL and my version are in the noise of the total calculation time. This is because the matrix build cost scales as (large constant) × N², while the matrix solve cost scales as (small constant) × N³; when N is small, the large constant is enough to make the build still take more time.

This project started with the idea of using batched BLAS, but we have since moved away from it because we cannot always rely on each team having the same matrix sizes.

kyungjoo-kim avatar kyungjoo-kim commented on May 25, 2024

@mhoemmen Batched BLAS does not make sense for this tridiagonal factorization. A batch operation applies a BLAS operation to "a set of matrices". Multiple (parallel) tridiagonal factorizations can be implemented as a sequence of batched GETRF, TRSM, and GEMM calls, but with batched BLAS we do not exploit data locality at all, even though the sequence of operations completely reuses previous computation results. That is why we need a functor-level interface around the parallel for.

We have a compact batched implementation for tridiagonal factorization (the LU is implemented without pivoting, as the tridiagonal factorization is used as a preconditioner; do you really need pivoting?). It is optimized for problem sizes < 32. For problem sizes between 100 and 1000, I would need to repack data (this is not yet implemented).

dholladay00 avatar dholladay00 commented on May 25, 2024

While we could get away without pivoting in most cases, it would be preferable to have pivoting. Also, @kyungjoo-kim I sent you an email. @mhoemmen do you wish to be included in the emails?

There were ways to include batching, but it eats up one of our levels of parallelism (each thread team would get a batch of inputs rather than a single set of inputs). When certain physics is enabled, each element of the batch can have a different matrix size and structure, removing the ability to use batched calls.
