Comments (3)
We could make a gesture toward a particular community, but where do we stop?
I don't share your concerns here: configuration tools (autoconf, CMake) have their own ways to identify the compile and link flags that need to be passed to any compiler to build applications. For everything else you can always fall back on mpicc --showme or mpicxx --showme.
from ompi.
I understand the pain of users just trying to compile / link their applications, particularly when trying to mix multiple tools -- such as MPI and CUDA.
However, I'm not sure that MPI needs to be the integration point for all compilation and linking. For example, if Open MPI includes an mpicudacc wrapper compiler, how will it know what flags to pull from CUDA? More specifically, what if I'm using an old Open MPI and a new CUDA release -- will the old Open MPI know how to pull the newest set of flags from CUDA? More generally: how does Open MPI keep up with these CUDA flags over time? Also, some flags are necessary, but others are configuration-dependent or user-chosen. How should mpicudacc know what choices to make for all of these?
And to @bosilca's point, how does Open MPI also keep up with the ROCm flags that are needed over time? ... etc. Open MPI's release schedule is orthogonal to the release schedules of all the other HPC tools; what happens when incompatible changes are made and Open MPI now has stale / incorrect flags for a peer tool? That seems undesirable, and just creates more user confusion and frustration.
Even if you flip the script and make CUDA be the integration point, how would CUDA keep up with the changing set of Open MPI (and MPICH and ...) flags over time?
Rather than everyone having to keep up with everyone else's flags, Open MPI's approach has been to provide multiple mechanisms to extract the flags from our wrapper compilers, and also to allow nesting of wrapper compilers. We -- Open MPI -- can't know exactly what the end user will want from their other tools, or what systems they will want to compile/link against. As such, all we can do is provide both standardized and Open MPI-specific ways to extract what is needed to compile/link against Open MPI.
- Nesting wrapper compilers via environment variables
- The Open MPI wrapper --showme* CLI options to extract wrapper compiler flags
- Installed .pc files to allow use of pkg-config to extract wrapper compiler flags
Are these existing mechanisms not sufficient?
Note: I'm not asking if they're trivially easy to use -- I'm asking if they're insufficient to allow correct compiling and linking of Open MPI to other systems.
I understand that compiling / linking large HPC applications can be challenging. But no matter how it is done, some level of expertise is going to be needed by the end user. Perhaps better documentation and/or examples are needed...? If there's something that can be done in Open MPI's docs, for example, I'm open to suggestions (let's do this in the Open MPI v5.0.x docs and beyond -- i.e., https://docs.open-mpi.org/ -- there's not much point in doing this for v4.1.x and earlier).
FWIW, that can be achieved locally by the end users.
From the install directory:
- symlink bin/mpiacc to opal_wrapper
- copy share/openmpi/mpicc-wrapper-data.txt into share/openmpi/mpiacc-wrapper-data.txt
- edit share/openmpi/mpiacc-wrapper-data.txt and replace the line compiler=... with compiler=nvcc
As @jsquyres pointed out, some other adjustments might be required.
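The steps above can be sketched as shell commands. This version runs against a mock install tree in a temporary directory so it can be tried safely; point PREFIX at a real Open MPI install prefix instead to apply it for real (note that GNU sed syntax is assumed):

```shell
# Mock Open MPI install tree (replace with your real install prefix)
PREFIX=$(mktemp -d)
mkdir -p "$PREFIX/bin" "$PREFIX/share/openmpi"
touch "$PREFIX/bin/opal_wrapper"
echo "compiler=gcc" > "$PREFIX/share/openmpi/mpicc-wrapper-data.txt"

# 1. symlink bin/mpiacc to opal_wrapper
ln -s opal_wrapper "$PREFIX/bin/mpiacc"

# 2. copy the mpicc wrapper data file under the new wrapper's name
cp "$PREFIX/share/openmpi/mpicc-wrapper-data.txt" \
   "$PREFIX/share/openmpi/mpiacc-wrapper-data.txt"

# 3. replace the compiler=... line with compiler=nvcc
sed -i 's/^compiler=.*/compiler=nvcc/' \
    "$PREFIX/share/openmpi/mpiacc-wrapper-data.txt"

grep '^compiler=' "$PREFIX/share/openmpi/mpiacc-wrapper-data.txt"  # compiler=nvcc
```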