Comments (9)
You may already know this, but be aware that SchedMD changed the srun
cmd line in 23.11 - you might need to make some adjustments.
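For anyone hitting this later: you can check which PMI plugin types the installed srun supports, and pin one explicitly rather than relying on the site default. A sketch (the binary name is a placeholder):

```shell
# List the MPI/PMI plugin types this srun build supports.
srun --mpi=list

# Launch with the PMIx plugin selected explicitly.
srun --mpi=pmix -n 4 ./hello_world_mpi
```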
from ompi.
Rebuilt Slurm and Open MPI to use PMIx 4.2.9 and dropped the PMIX_MCA_gds=hash setting. Ran ~3000 hello_world jobs in this environment without seeing any core dumps.
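For anyone reproducing this, the rebuild amounts to pointing both configure scripts at the same external PMIx install; the source directories and prefix below are placeholders:

```shell
# Build both Slurm and Open MPI against the same external PMIx 4.2.9.
# /opt/pmix/4.2.9 is a placeholder prefix.
cd slurm-src   && ./configure --with-pmix=/opt/pmix/4.2.9 && make -j && make install
cd ../ompi-src && ./configure --with-pmix=/opt/pmix/4.2.9 && make -j && make install
```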
Just to be clear: you originally said you ran 3000 jobs with mpirun using PMIx 5.0.2 and saw no problems. So I'm assuming your last test refers to executing with srun and not mpirun - yes??
I fail to see a connection between PMIx and ob1/recv being caught in a segfault - we don't have anything to do with the MPI message exchange. Likewise, it's hard to see what srun has to do with it, so I have no idea what to suggest. Given everything you have encountered across the two issue reports, I suspect there is something more fundamentally borked in this system.
My apologies - blasted github had me logged into a different account when I wrote the above note. Sigh.
no worries - thanks for taking a look at this for me.
Yep, the new testing using an srun launch with a PMIx 4.2.9 based slurm/openmpi did not see any core dumps in ~3000 runs. I'll stick with this new setup for now since things seem happier.
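For reference, the kind of stress loop I mean (script layout, node/task counts, and binary path are illustrative, not my exact setup):

```shell
#!/bin/sh
# Repeatedly launch the hello-world job under srun and stop at the
# first failure, so a rare crash is caught rather than averaged away.
for i in $(seq 1 3000); do
    srun --mpi=pmix -N 2 -n 4 ./hello_world_mpi || { echo "run $i failed"; exit 1; }
done
echo "all 3000 runs passed"
```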
If you can think of any env variables I can set to provide more debug information, please let me know and I can give them a try and report back what I find.
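The usual knobs here are the Open MPI and PMIx MCA verbosity parameters, which can be set as environment variables before the srun launch. A sketch - which specific frameworks are worth raising is my guess, and the PMIx one in particular is an assumption that PMIx follows the same `<framework>_base_verbose` naming pattern:

```shell
# Any OMPI/PMIx MCA parameter can be set via the environment as
# OMPI_MCA_<name> / PMIX_MCA_<name>.
export OMPI_MCA_pml_base_verbose=100    # point-to-point messaging layer (ob1)
export OMPI_MCA_btl_base_verbose=100    # byte-transfer layer underneath ob1
export PMIX_MCA_ptl_base_verbose=100    # PMIx client/server transport (assumed name)
echo "pml verbosity: $OMPI_MCA_pml_base_verbose"
```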
Gave this some thought - given that things work fine under mpirun but fail under srun, I'm inclined to think there is some problem in the Slurm-PMIx integration when using PMIx 5.x. I know nothing about debugging Slurm, so I would really encourage you to file a ticket with SchedMD. At the very least, they should be made aware of the situation in case others encounter it.
It still feels to me like there is something else in your environment causing the problem (and the PMIx change being just a canary or flat out red herring), but minus more info, I have no idea how to pursue it.
one last note to add here before closing this one out and turning my focus to the Slurm/SchedMD side of the house. Two interesting things:
- adding strace in front of the hello_world_mpi application buries/hides/avoids the issue
- removing the cgroup-related options from slurm.conf also appears to bury/hide/avoid the issue
Turns out that I had disabled cgroups in my testing area earlier and forgotten about it. My comments above about PMIx impacting this issue should be ignored. Much more likely it was the change to the slurm configuration in my test environment that changed the launch behavior.
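For completeness, the two observations above in concrete form (the binary name is from my test; the slurm.conf lines are the standard cgroup knobs, and my exact settings may have differed):

```shell
# Wrapping the app in strace makes the segfault disappear
# (likely a timing/scheduling perturbation).
srun --mpi=pmix -n 4 strace -f -o /tmp/trace ./hello_world_mpi

# The cgroup-related slurm.conf settings whose removal also hid the issue:
#   ProcTrackType=proctrack/cgroup
#   TaskPlugin=task/cgroup,task/affinity
```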
@bhendersonPlano If this issue is in Slurm or PMIx rather than OMPI, can you please file it with the corresponding community and close it here?
I've started a thread on the slurm-users mailing list - hopefully someone will chime in there.
I'll close this one out as it does not appear to be an OpenMPI issue.