
sharp's Introduction

Scalable Heterogeneous Architecture for Reproducible Performance

A simple synthetic performance benchmark for heterogeneous and serverless architectures.

Prerequisites

Hardware and software setup instructions found here.

Quick start

After setting up the software and hardware, check that you can run functions correctly on your chosen backend, for example the local backend:

launchers/launch.py -v -b local sleep 1

This should take about one second and produce some output. If there are no errors, proceed to run a single benchmark:

cd examples
rm -rf reports/misc-local
make backend=local benchmarks="parallel_sleep"
cd ..

This should produce a PDF report file (and various other formats) as reports/misc-local/report.pdf. Inspect the file and ensure it shows no error messages.

Graphical user interface

Alternatively, you can try out SHARP with a GUI that lets you run measurements, visualize the results, and compare different runs. Setup and run instructions can be found here.

Hardware support

Currently supports the following architectures:

FaaS framework support

Currently supports (using Kubernetes):

  • Fission
  • Knative

Metrics

All benchmarks collect a metric called outer_time that measures how long (in seconds) each run took from the perspective of the launcher, i.e., including both benchmark execution and all setup and overhead time. In addition, a benchmark can log any arbitrary metric in the CSV files and have it reported in the PDF file, as long as it outputs the metric to stdout. Complete documentation on how to add or customize metrics can be found here.
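As a minimal sketch of the mechanism described above: a benchmark does its measured work and prints a metric to stdout. Only "output the metric to stdout" comes from this README; the exact key/value output format SHARP parses is an assumption here (see the metrics documentation).

```python
# Hypothetical benchmark body emitting a custom metric on stdout.
import time

start = time.perf_counter()
total = sum(i * i for i in range(100_000))  # the measured work
elapsed = time.perf_counter() - start

# A custom metric, logged alongside the launcher-collected outer_time.
# The "name: value" format is an assumption; consult the metrics docs.
print(f"compute_time: {elapsed:.6f}")
```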

Functions

  • bounce: Request-bound, time to read request.
  • swapbytes: I/O-bound, cache-oblivious byte-swap.
  • inc: Memory-bound, single-threaded array incrementer.
  • cuda-inc: GPU-parallel version (CUDA) of inc.
  • matmul: CPU-bound, multithreaded matrix multiply.
  • cuda-matmul: GPU-parallel version (CUDA) of matmul.
  • mpi-pingpong-single: Simple MPI Python application entirely executed within a single function.
  • nope: No-op, for latency measurements.
  • sleep: Sleep for a given number of seconds, for cooling down.
  • distributions: A set of synthetic distributions for debugging purposes.

You can find detailed documentation for all these functions here. If you want to add a function/application, please follow these guidelines.
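To give a flavor of what these microbenchmarks measure, here is a single-threaded toy in the spirit of inc (an illustration only, not the project's actual implementation; the printed metric name is hypothetical):

```python
import time

def inc_sketch(n=200_000, reps=5):
    """Memory-bound toy in the spirit of `inc`: increment every
    element of an array `reps` times and time the whole loop."""
    data = [0] * n
    start = time.perf_counter()
    for _ in range(reps):
        for i in range(n):
            data[i] += 1
    elapsed = time.perf_counter() - start
    # Report the timing on stdout so a harness like SHARP could log it.
    print(f"inc_time: {elapsed:.6f}")
    return data, elapsed

data, elapsed = inc_sketch()
```

A GPU variant such as cuda-inc would perform the same per-element increments in parallel, which is why the two make a useful CPU/GPU comparison pair.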

Applications

  • rodinia-omp: The Rodinia HPC benchmarking suite, CPU-based using OpenMP.
  • rodinia-cuda: The Rodinia HPC benchmarking suite, GPU-based using CUDA.

Example workflows / benchmarks

A small library of simple benchmarks and workflows is prebuilt with SHARP:

  • parallel sleep: Evaluate the scaling of the backend as the number of jobs increases.
  • start latency: Measure cold- and hot-start latencies.
  • performance prediction: Measure variability in the input metrics and performance predictions of AUB's benchmark suites.

The last benchmark (perfpred) has a generic reporting mechanism that simply visualizes the distributions of all the collected metrics. It can be used as a template for any other benchmark where this visualization is enough (or a good starting point). To adapt it to your needs, simply copy the perfpred directory under examples/ and edit the files to use your own metrics and descriptions.
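The copy-and-edit adaptation might look like the following sketch. The benchmark name mybench is hypothetical, and the scratch perfpred directory created below is only a stand-in so the commands run anywhere; in the real tree, examples/perfpred already exists and the first two lines are unnecessary.

```shell
# Stand-in for the real examples/perfpred directory (assumption):
mkdir -p examples/perfpred
printf 'benchmarks = perfpred\n' > examples/perfpred/Makefile

# Copy the template under a new (hypothetical) benchmark name:
cp -r examples/perfpred examples/mybench
# ...edit examples/mybench to use your own metrics and descriptions,
# then run it like any other benchmark:
#   cd examples && make backend=local benchmarks="mybench"
ls examples/mybench
```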


Directory structure

The code is organized into these subdirectories:

  • fns: Individual functions that can be used and composed in benchmarks (overviewed in the Functions section above).
  • examples: Top-level benchmark Makefile and individual benchmarks using workflows of these functions (overviewed here).
  • launchers: An abstraction layer to launch any function on any backend (described here).
  • workflows: A description of workflow formats and a conversion script from CNCF format to Makefiles (described here).
  • runlogs: Output top-level directory for log files from individual function runs, organized by experiment subdirectories.
  • reports: Output top-level directory for complete benchmark results and analyses, organized by experiment subdirectories.
  • docs: Contains general setup instructions, as well as instructions specific to each backend, function, and benchmark.

sharp's People

Contributors

eitanf-hpe
