Comments (13)
I cannot reproduce this. I'm using gfortran 8.5.0. Which Fortran compiler are you using?
from ompi.
mpifort --version
GNU Fortran (GCC) 13.2.0
Copyright (C) 2023 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Oh, that is interesting. Do you really mean gfortran 8.5.0? I ask because I am not sure whether gfortran 8.5 implements the Fortran 2008 constructs type(*) and dimension(..), which are in principle necessary for the mpi_f08 interface. (But I could be wrong about this.)
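One way to check whether a given compiler accepts these constructs is to compile a tiny probe program, similar in spirit to what configure scripts do. A minimal sketch (the helper name and the Fortran probe source are my own, not OMPI's actual configure test; it assumes the compiler is on PATH):

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

# Tiny Fortran dummy-argument declaration exercising the Fortran 2008 /
# TS 29113 assumed-type (type(*)) and assumed-rank (dimension(..)) features.
PROBE_SOURCE = """\
subroutine probe(buf)
  type(*), dimension(..) :: buf
end subroutine probe
"""

def compiler_supports_assumed_rank(fc: str = "gfortran") -> bool:
    """Return True if compiler `fc` compiles the type(*)/dimension(..) probe."""
    if shutil.which(fc) is None:
        raise FileNotFoundError(f"compiler {fc!r} not found on PATH")
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "probe.f90"
        src.write_text(PROBE_SOURCE)
        result = subprocess.run(
            [fc, "-c", str(src), "-o", str(Path(tmp) / "probe.o")],
            capture_output=True,
        )
        return result.returncode == 0

if __name__ == "__main__":
    for fc in ("gfortran", "mpifort"):
        if shutil.which(fc):
            print(fc, "accepts type(*), dimension(..):",
                  compiler_supports_assumed_rank(fc))
```

Running this against gfortran 8.x versus 13.x would settle whether the older compiler accepts the assumed-type/assumed-rank declarations in question.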
I can't replicate it either:
mpifort --version
GNU Fortran (GCC) 11.4.1 20230605 (Red Hat 11.4.1-2)
Can you find your OMPI libraries and send back the output of nm libmpi_usempif08.so | grep -i MPI_Status_
Unfortunately, Open MPI currently does not use the Fortran assumed-rank feature; we're still at assumed shape. Hopefully this will be fixed soon.
@bosilca Sure, I obtain the following output:
nm libmpi_usempif08.so | grep -i MPI_Status_
00000000000233db T mpi_status_f082f_f08_
0000000000023419 T mpi_status_f2f08_f08_
0000000000023457 T mpi_status_set_cancelled_f08_
0000000000023495 T mpi_status_set_elements_f08_
00000000000234d8 T mpi_status_set_elements_x_f08_
U ompi_status_f082f_f
U ompi_status_f2f08_f
U ompi_status_set_elements_f
U ompi_status_set_elements_x_f
0000000000030533 T pmpi_status_f082f_f08_
0000000000030571 T pmpi_status_f2f08_f08_
U pmpi_status_set_cancelled_
00000000000305af T pmpi_status_set_cancelled_f08_
00000000000305ed T pmpi_status_set_elements_f08_
0000000000030630 T pmpi_status_set_elements_x_f08_
But please note that the problem is not that the routine is missing from the library; the problem is that it is missing from the mpi_f08 Fortran module. I get a compilation error, not a linking error. If I replace

use mpi_f08, only : ..., MPI_Status_f082f, ...

with

use mpi_f08

the code compiles (but for the wrong reason):

- In the former case, the compiler explicitly checks the mpi_f08.mod module file (generated during the build of Open MPI) for the signature of the MPI_Status_f082f() routine, and it fails because the signature is not found.
- In the latter case, the module file is also checked and the signature is not found either, but the compiler is then allowed to assume that MPI_Status_f082f() is an arbitrary external routine with an unknown signature. The compiler builds the program, but no checking of argument counts or types is done; you just hope for the best. You can test it yourself: write use mpi_f08 without the only clause and remove one of the arguments in the call to MPI_Status_f082f(). The code will still happily compile (and then very likely crash at run time).
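As an aside, the nm listing above can be read mechanically: "T" marks a symbol defined in the library's text section, while "U" marks a symbol the library expects some other object to provide. A small sketch (hypothetical helper; the sample data is four lines taken verbatim from the listing above):

```python
# Sample lines copied from the `nm libmpi_usempif08.so` output above.
NM_SAMPLE = """\
00000000000233db T mpi_status_f082f_f08_
0000000000023419 T mpi_status_f2f08_f08_
U ompi_status_f082f_f
U pmpi_status_set_cancelled_
"""

def classify_symbols(nm_output: str):
    """Partition nm output into defined ('T') and undefined ('U') symbol names."""
    defined, undefined = [], []
    for line in nm_output.splitlines():
        parts = line.split()
        if len(parts) == 3:        # "<address> <type> <name>"
            sym_type, name = parts[1], parts[2]
        elif len(parts) == 2:      # "<type> <name>" (undefined: no address)
            sym_type, name = parts
        else:
            continue
        if sym_type == "T":
            defined.append(name)
        elif sym_type == "U":
            undefined.append(name)
    return defined, undefined

if __name__ == "__main__":
    d, u = classify_symbols(NM_SAMPLE)
    print("defined:", d)
    print("undefined:", u)
```

This confirms the point made above: the symbol is present in the shared library, so the failure has to come from the module file, not the linker.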
Another thing to check is the mpi_f08.mod. It's actually just gzipped text in the case of the GNU compiler.
When I gunzip the module file generated by gfortran 8.5.0, I see an entry of this form:
4952 'mpi_status_f082f' 'mpi_f08_interfaces' '' 1 ((PROCEDURE
UNKNOWN-INTENT UNKNOWN-PROC UNKNOWN UNKNOWN 0 0 SUBROUTINE GENERIC
ARRAY_OUTER_DEPENDENCY) () (UNKNOWN 0 0 0 0 UNKNOWN ()) 0 0 () () 0 () ()
() 0 0)
@hppritcha That's a good point. I've checked it and
zcat mpi_f08.mod | grep -i mpi_status_f082f
is empty.
Running
for m in *.mod; do echo "*** $m ***"; zcat $m | grep -i mpi_status_f082f; done
returns
*** mpi.mod ***
4135 'mpi_status_f082f' 'mpi' '' 1 ((PROCEDURE UNKNOWN-INTENT
8139 'pmpi_status_f082f' 'mpi' '' 1 ((PROCEDURE UNKNOWN-INTENT
'mpi_status' 0 4134 'mpi_status_f082f' 0 4135 'mpi_status_f2f08' 0 4140
0 8120 'pmpi_start' 0 8130 'pmpi_startall' 0 8134 'pmpi_status_f082f' 0
*** mpi_ext.mod ***
4153 'mpi_status_f082f' 'mpi' '' 1 ((PROCEDURE UNKNOWN-INTENT
8201 'pmpi_status_f082f' 'mpi' '' 1 ((PROCEDURE UNKNOWN-INTENT
'mpi_status' 0 4152 'mpi_status_f082f' 0 4153 'mpi_status_f2f08' 0 4158
0 8182 'pmpi_start' 0 8192 'pmpi_startall' 0 8196 'pmpi_status_f082f' 0
*** mpi_f08.mod ***
*** mpi_f08_callbacks.mod ***
*** mpi_f08_ext.mod ***
*** mpi_f08_interfaces.mod ***
*** mpi_f08_interfaces_callbacks.mod ***
*** mpi_f08_types.mod ***
*** mpi_types.mod ***
*** pmpi_f08_interfaces.mod ***
So it is apparently present in mpi.mod and mpi_ext.mod, but in none of the mpi_f08 module files.
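The shell loop above can also be written as a short script: since gfortran's .mod files are gzip-compressed text (as noted earlier), the standard gzip module suffices. A sketch (function name is my own; this assumes gfortran-style module files and skips anything that is not gzipped):

```python
import gzip
from pathlib import Path

def mod_files_mentioning(symbol: str, directory: str = ".") -> list[str]:
    """Return the names of .mod files in `directory` whose decompressed
    contents mention `symbol`, case-insensitively."""
    hits = []
    for mod in sorted(Path(directory).glob("*.mod")):
        try:
            text = gzip.decompress(mod.read_bytes()).decode(
                "utf-8", errors="replace")
        except gzip.BadGzipFile:
            continue  # not gzipped text; probably not a gfortran module file
        if symbol.lower() in text.lower():
            hits.append(mod.name)
    return hits

if __name__ == "__main__":
    # Point this at the directory holding Open MPI's Fortran module files.
    print(mod_files_mentioning("mpi_status_f082f"))
```

On the installation discussed above, this would report mpi.mod and mpi_ext.mod but not mpi_f08.mod, matching the zcat/grep output.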
One more piece of information: I am using the --enable-mpi1-compatibility configure option. Could that make a difference?
MPI-1 compatibility should not remove this API; it only adds back some deprecated APIs. However, the inclusion of the status_f08 accessors is driven by OMPI_FORTRAN_HAVE_TYPE_MPI_STATUS, which is detected during configure by the macro OMPI_FORTRAN_CHECK_BIND_C_TYPE_NAME. This macro tries to detect whether the compiler supports TYPE(...), BIND(C, NAME=...), and apparently your compiler fails this test. Can you please search your config.log for 'if Fortran compiler supports' to see why the test failed?
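A quick way to pull those configure-test results out of a typically very long config.log is to print each matching line together with a few lines of following context (the shell equivalent would be grep with -A). A Python sketch of the same idea (the function name, default file name, and context size are assumptions):

```python
import sys
from pathlib import Path

def configure_check_context(log_text: str,
                            phrase: str = "if Fortran compiler supports",
                            context: int = 3):
    """Yield each line containing `phrase` plus the next `context` lines,
    so the test's outcome (and any error) is visible alongside it."""
    lines = log_text.splitlines()
    for i, line in enumerate(lines):
        if phrase in line:
            yield "\n".join(lines[i : i + 1 + context])

if __name__ == "__main__":
    log = Path(sys.argv[1] if len(sys.argv) > 1 else "config.log")
    if log.exists():
        for block in configure_check_context(log.read_text(errors="replace")):
            print(block)
            print("---")
```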
Uh oh, I forgot I was testing on our main branch. There seems to be a problem with 5.0.3; checking whether it's fixed in the 5.0.x branch. I can reproduce this issue using GCC 13.2.0 and Open MPI 5.0.3.
This seems to not be fixed on the 5.0.x branch.
Looks like 83eb116 was not cherry-picked back to v5.0.x.