License: Apache License 2.0

MPIWasm

MPIWasm is a WebAssembly Runtime (embedder) based on Wasmer that enables the high-performance execution of MPI applications compiled to Wasm.

You can find more details about MPIWasm in our ACM SIGPLAN PPoPP'23 Paper.

Getting Started:

This section describes how to run our WebAssembly embedder (MPIWasm) to execute MPI applications compiled to WebAssembly (Wasm).

What is MPIWasm?

MPIWasm is a Wasmer-based embedder for MPI-based HPC applications. It enables the high-performance execution of MPI-based HPC applications compiled to Wasm and serves two purposes:

  1. Delivering close-to-native performance, i.e., performance comparable to executing applications directly on the host machine without Wasm.
  2. Enabling the distribution of MPI-based HPC applications as Wasm binaries.

Requirements

  • Docker
  • Currently the docker image is only built for the linux/amd64 platform. For building images for other platforms, please see here.

Steps:

sudo docker run -it kkyfury/ppoppae:v2 /bin/bash

#Executing the HPCG benchmark compiled to Wasm inside the docker container
mpirun --allow-run-as-root -np 4 ./target/release/embedder examples/xhpcg.wasm 
#Wait 2-3 minutes for the execution to complete

What should the output look like?

MPIWasm should successfully execute the HPCG benchmark which has been compiled to Wasm. On successful execution, you should see something similar to this.

#Executing the IntelMPI benchmarks compiled to Wasm inside the docker container, redirecting harmless error output to a file
mpirun --allow-run-as-root -np 4 ./target/release/embedder examples/imb.wasm 2>error
#Wait 2-3 minutes for the execution to complete

What should the output look like?

MPIWasm should successfully execute the IntelMPI benchmarks which have been compiled to Wasm. On successful execution, you should see something similar to this.

Note:

The number of processes for execution (-np) can be increased or decreased. However, depending on your machine, you might need to pass the --oversubscribe flag to mpirun.
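For example, to run with more ranks than physical cores, the invocation might look like the following sketch (it reuses the container paths from the commands above; the rank count of 8 is arbitrary):

```shell
# Run the HPCG benchmark with 8 ranks, allowing more ranks than physical cores.
mpirun --allow-run-as-root --oversubscribe -np 8 \
    ./target/release/embedder examples/xhpcg.wasm
```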

Running Experiments with MPIWasm

This section describes how to run experiments with our embedder to obtain plots similar to the ones in our paper.

Running small-scale experiments inside the docker container.

To run small-scale experiments inside the docker container, we provide an end-to-end script. This script:

  1. Executes the HPCG, IS, and IntelMPI benchmarks for their native execution and when they are executed using MPIWasm after compilation to Wasm.
  2. Parses the obtained results and generates the relevant plots.

Script execution:

sudo docker run -it kkyfury/ppoppae:v2 /bin/bash
cd run_experiments
./runme.sh

The script can take around 10-15 minutes to finish execution. After completion, you can find the generated data in the run_experiments/experiment_data folder and the generated plots in the run_experiments/Plots folder. For copying the plots to your local filesystem, please use the docker cp command.
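The copy step might look like this sketch (the container id is a placeholder, and the in-container path assumes run_experiments sits under the container's working directory; adjust to your setup):

```shell
# Find the id of the (possibly stopped) experiment container.
sudo docker ps -a
# Copy the plots to the host; <container-id> is a placeholder.
# Relative source paths are resolved against the container's working directory.
sudo docker cp <container-id>:run_experiments/Plots ./Plots
```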

Running large-scale experiments on an HPC system.

This section describes running MPIWasm for executing MPI applications on multiple nodes of an HPC system. For this, a user needs to do the following:

  1. Build a version of the embedder for your HPC system, depending on its particular architecture, operating system, and MPI library (reference). MPIWasm currently supports the OpenMPI library.
  2. For building MPIWasm for different linux distributions and OpenMPI versions, please see here. It is important to ensure that the MPI library version on the HPC system matches the one with which the embedder is built. For building MPIWasm for different architectures, please see here.
  3. Execute the MPI applications using the built embedder on the HPC system. This can be done by submitting jobs to the RJMS software on the HPC system, such as SLURM. We provide sample job scripts for our HPC system, i.e., SuperMUC-NG, which uses SLURM, here.
  4. After executing the applications, the user can use the different Parsers to parse the benchmark data. Following this, the results can be visualized using our Plotting helper script.
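A minimal SLURM job script for step 3 might look like the following sketch (job name, node and task counts, time limit, and binary paths are placeholders; consult the sample job scripts mentioned above for system-specific settings):

```shell
#!/bin/bash
#SBATCH --job-name=mpiwasm-hpcg
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=48
#SBATCH --time=00:30:00

# Placeholder paths: point them at your built embedder and Wasm binary.
mpirun ./target/release/embedder examples/xhpcg.wasm
```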

Step by Step Instructions:

Compiling C/C++ applications to Wasm (Section 3.2)

We have set up a docker container with the required dependencies for compiling different MPI applications conformant to the MPI-2.2 standard to Wasm. We have also included the HPCG benchmark, the Intel MPI benchmarks, and the IS benchmark as examples.

Steps:

sudo docker run -it kkyfury/wasitoolchain:v1 /bin/bash

#Compiling HPCG
cd /work/example/hpcg-benchmark
./wasi-cmake.sh
cd cmake-build-wasi
make 
# Following this, you can see the generated wasm binary which can be executed with our embedder.

#Compiling IntelMPI Benchmarks
cd /work/example/intel-mpi-benchmarks
./wasi-cmake.sh
cd cmake-build-wasi
make 
# Ignore the warnings during compilation; you can see the generated wasm binaries in the cmake-build-wasi/src_cpp directory

#Compiling IS Benchmark
cd NPB3.4.2/NPB3.4-MPI/
make IS CLASS=C
#Following this, you can see the generated wasm binary in the bin folder.

For compiling different MPI applications to Wasm, please refer to this Readme file. All the different applications compiled to Wasm that we used in our paper are present here (Section 4). More details about them can be found here.
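As a sketch, compiling a standalone MPI C source to Wasm with the wasi-sdk toolchain follows the same pattern as the PingPong command used elsewhere in this repository (the sdk path, version, and include path match that command; myapp.c is a placeholder for your own source file):

```shell
# Compile an MPI C program to Wasm with wasi-sdk. Undefined MPI symbols are
# left unresolved (--allow-undefined) and provided by MPIWasm at instantiation.
/opt/wasi-sdk/12/bin/clang --sysroot=/opt/wasi-sdk/12/share/wasi-sysroot \
    myapp.c -I/tmp/installer/include/ -O3 \
    -Wl,--allow-undefined,--export=malloc,--export=free \
    -o myapp.wasm
```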

Compiling C/C++ applications natively

We provide Dockerfiles for natively compiling the different benchmarks used in our experiments. All pre-compiled native binaries can be found here.

Embedder (MPIWasm) (Section 3)

Building from source

For detailed instructions, please look at this Readme file.

Note

We recommend using our provided docker-compose file for building the embedder from source for different distributions. This prevents the unnecessary installation of software on a user's local system.

Sample Usage for Ubuntu:20.04

cd wasi-mpi-rs
docker-compose run ubuntu-20-04
cargo build --release
#After the build process, you can see the built embedder in the /target/release/ folder.

Support for multiple operating systems.

We provide support for building our embedder for the following distributions:

  1. centos-8-2
  2. opensuse-15-1
  3. ubuntu-20-04
  4. macos-monterey based on Docker-OSX. Note that the generated embedder might not be directly compatible with darwin distributions.

For all of these distributions, the embedder can be built by using the provided docker-compose file.

Sample usage for centos-8.2

cd wasi-mpi-rs
docker compose run centos-8-2
cargo build --release
#After the build process, you can see the built embedder in the /target/release/ folder.

Following this, the user can copy the embedder to their HPC system using the docker cp command.

Example:

docker cp <container-id>:/s/target/release/embedder <destination-path-user-filesystem>

The base images used for the different operating systems can be found here. These example Dockerfiles can easily be extended to support other linux distributions. We provide pre-built versions of our embedder for the different distributions here. For specific OpenMPI versions, please see the individual Dockerfiles.

Note:

The path for the generated embedder for macos-monterey is /home/arch/s/.


Usage

For detailed instructions, please look at this Readme file.


Modifying the embedder

For modifying our embedder, we recommend using our provided docker-compose file for any of the supported operating systems. This docker-compose file mounts the volume containing the embedder's source code inside the container. As a result, any changes to the source code are automatically reflected inside the container.

All modifications to the embedder need to be done here. Following this, the embedder needs to be recompiled.

Sample workflow

cd wasi-mpi-rs
docker-compose run ubuntu-20-04
#Make any relevant change to the embedder's source code inside the /wasi-mpi-rs/src/ directory. These changes are automatically reflected inside the container. 
cargo build --release

After the build process, the new embedder can be copied to the user's local filesystem using the docker cp command.


Support for arm64

Our embedder also supports execution on linux/arm64 platforms. We provide pre-built versions of our embedder for arm64 for the different linux distributions here. For specific OpenMPI versions, please see the Dockerfiles.


Building images for arm64

Requirements:

If you are building the docker image on an x86_64 system, you require docker buildx. Note that building the image might take around 12 hours.

If you are using an arm64 machine, follow the normal instructions.

The following example builds the embedder for ubuntu:20.04 for arm64 on an x86_64 machine:

sudo docker buildx create --name mybuilder --use --bootstrap
cd wasi-mpi-rs/.gitlab/ci/images/
sudo docker buildx build --push -f ubuntu-20-04.Dockerfile --platform linux/arm64 -t kkyfury/ubuntumodifiedbase:v1 .
cd ../../../
sudo docker buildx build --push -f Dockerfile --platform linux/arm64 -t kkyfury/embedderarm:v1 .

Please change the docker image tags according to your docker registry account, i.e., replace kkyfury with your registry username. Following this, the image name in the FROM keyword in the Dockerfile needs to be changed accordingly.
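With a hypothetical registry username myuser, the retagged build and the corresponding Dockerfile change would look like this sketch (names are placeholders; substitute your own account):

```shell
# Build and push the base image under your own registry account ("myuser" is a placeholder).
sudo docker buildx build --push -f ubuntu-20-04.Dockerfile \
    --platform linux/arm64 -t myuser/ubuntumodifiedbase:v1 .
# Update the base image in the top-level Dockerfile to match:
#   FROM myuser/ubuntumodifiedbase:v1
# Then build and push the embedder image under your account.
sudo docker buildx build --push -f Dockerfile \
    --platform linux/arm64 -t myuser/embedderarm:v1 .
```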

Citation

If you use MPIWasm in your work, please cite our paper:

@inproceedings{mpiwasm,
    author = {Chadha, Mohak and Krueger, Nils and John, Jophin and Jindal, Anshul and Gerndt, Michael and Benedict, Shajulin},
    title = {Exploring the Use of WebAssembly in HPC},
    year = {2023},
    isbn = {9798400700156},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3572848.3577436},
    doi = {10.1145/3572848.3577436},
    pages = {92--106},
    numpages = {15},
    keywords = {wasmer, wasm, WebAssembly, MPI, HPC},
    location = {Montreal, QC, Canada},
    series = {PPoPP '23}
}

Contributors

  • kky-fury

mpiwasm's Issues

PPoPP Shepherding Todos

  • Experimental data on a large scale (100s of nodes)
  • Clear description of the caching mechanism
  • Comparison with statically linked binary
  • Evaluation of MPI data types
  • Clarity on memory safety

PPoPP AE

  • Write script to do small-scale experiments inside the docker container for IMB, HPCG, and IS
  • Write documentation about how to do large-scale experiments
  • Write script to generate csvs and plots
  • Building embedder from source

Translation overhead

Command for compiling PingPong

/opt/wasi-sdk/12/bin/clang --sysroot=/opt/wasi-sdk/12/share/wasi-sysroot pingpong.c -I/tmp/installer/include/ -O3 -Wl,--allow-undefined,--export=malloc,--export=free -o pingpong.wasm

Binary Sizes

| Application | Native Size (KB) | Static Size (MB) |
| --- | --- | --- |
| IntelMPI Benchmarks | 568 | 27 |
| HPCG | 164 | 26 |
| IOR | 364 | 16 |
| IS | 36 | 15 |
| DT | 40 | 15 |
