noaa-gfdl / fms

GFDL's Flexible Modeling System

License: Other

Topics: gfdl, climate-model, fms, climate, infrastructure, netcdf, fortran

Introduction

General Project Information

Modeling Framework: Flexible Modeling System (FMS)

Today’s climate models simulate highly complex systems. In response to this growing complexity, the climate community has developed tools and methodologies to facilitate the modeling process and to handle many common tasks (e.g., calendar management, grid generation, I/O). Such frameworks offer a number of advantages, including decreased model development time and increased compatibility of interfaces.

The Flexible Modeling System (FMS) is a software environment that supports the efficient development, construction, execution, and scientific interpretation of atmospheric, oceanic, and climate system models. This framework allows algorithms to be expressed on a variety of high-end computing architectures using common and easy-to-use expressions of the underlying platforms, spanning distributed and shared memory, as well as high-performance architectures. Scientific groups at GFDL can develop new physics and new algorithms concurrently, and coordinate periodically through this framework.

FMS Framework

Modeling frameworks for the construction of coupled models built from independent model components are now prevalent across the field. FMS was one of the first such frameworks, developed since the advent of the Cray T3E in 1998, and it remains in use and under active development today on new architectures and with new algorithms.

What is FMS?

The Flexible Modeling System (FMS) is a software framework for supporting the efficient development, construction, execution, and scientific interpretation of atmospheric, oceanic, and climate system models. FMS consists of the following:

  1. A software infrastructure for constructing and running atmospheric, oceanic, and climate system models. This infrastructure includes software to handle parallelization, input and output, data exchange between various model grids, orchestration of the time stepping, makefiles, and simple sample run scripts. This infrastructure should largely insulate FMS users from machine-specific details.
  2. A standardization of the interfaces between various component models including software for standardizing, coordinating, and improving diagnostic calculations of FMS-based models, and input data preparation for such models. Common preprocessing and post-processing software are included to the extent that the needed functionality cannot be adequately provided by available third-party software.
  3. Contributed component models that are subjected to a rigorous software quality review and improvement process. The development and initial testing of these component models is largely a scientific question, and would not fall under FMS. The quality review and improvement process includes consideration of (A) compliance with FMS interface and documentation standards to ensure portability and inter-operability, (B) understandability (clarity and consistency of documentation, comments, interfaces, and code), and (C) general computational efficiency without algorithmic changes.
  4. A standardized technique for version control and dissemination of the software and documentation.

FMS does not include the determination of model configurations, parameter settings, or the choice amongst various options. These decisions require scientific research. Similarly, the development of new component models is a scientific concern that is outside of the direct purview of FMS. Nonetheless, infrastructural changes to enable such developments are within the scope of FMS. The collaborative software review process of contributed models is therefore an essential facet of FMS.

Dependencies and installation

The following external libraries and tools are required when building libFMS:

  • NetCDF C and Fortran (77/90) headers and libraries
  • A Fortran compiler that supports the Fortran 2003 standard
  • A Fortran compiler that supports Cray pointers
  • MPI C and Fortran headers and libraries (optional)
  • libyaml headers and libraries (optional)
  • A Linux or other Unix-style system

Please see the Build and Installation page for more information on building with each build system.

Compiler Support

For most production environments and large-scale regression testing, FMS is currently compiled with the Intel Classic compiler (ifort), with a planned transition to the LLVM-based Intel compiler (ifx) once it is available for production use.

The table below shows the status of our support for various compilers and versions. Testing was done on CentOS 8, with additional testing on a larger Cray SLES system. MPICH is used as the MPI library, except for the Intel compilers, which use Intel's MPI library. Compilers used in our GitHub continuous-integration testing are in bold.

Compiler                 Version    Builds Successfully           Unit Testing
Intel Classic (ifort)    2021.6.0   yes                           passes
GNU (gfortran)           9.3.0      yes                           passes
Intel oneAPI (ifx)       2021.6.0   yes                           passes
GNU (gfortran)           11.2.0     yes                           passes
HPE/Cray (cce)           9.1.1      yes                           not passing
Nvidia/PGI (nvfortran)   22.9       no                            not passing
AMD (aocc)               3.2.0      no (compiles, fails to link)  not passing

Documentation

Source code documentation for the FMS code base is available at http://noaa-gfdl.github.io/FMS. The documentation is generated by Doxygen and updated upon releases. A copy of the site can be obtained through the gh-pages branch, or generated manually with ./configure --enable-docs && make -C docs. For more information on documenting the code with Doxygen, please see the documentation style guide.

Disclaimer

The United States Department of Commerce (DOC) GitHub project code is provided on an 'as is' basis and the user assumes responsibility for its use. DOC has relinquished control of the information and no longer has responsibility to protect the integrity, confidentiality, or availability of the information. Any claims against the Department of Commerce stemming from the use of its GitHub project will be governed by all applicable Federal law. Any reference to specific commercial products, processes, or services by service mark, trademark, manufacturer, or otherwise, does not constitute or imply their endorsement, recommendation or favoring by the Department of Commerce. The Department of Commerce seal and logo, or the seal and logo of a DOC bureau, shall not be used in any manner to imply endorsement of any commercial product or activity by DOC or the United States Government.

This project code is made available through GitHub but is managed by NOAA-GFDL at https://gitlab.gfdl.noaa.gov.

fms's People

Contributors

aerorahul, bensonr, climbfuji, cmdupuis3, colingladuenoaa, edhartnett, edwardhartnett, fabienpaulot, ganganoaa, gbw-gfdl, gfdl-eric, github-actions[bot], hallberg-noaa, j-lentz, laurenchilutti, lharris4, marshallward, mcallic2, menzel-gfdl, mjharrison-gfdl, mlee03, ngs333, nikizadehgfdl, rem1776, slm7826, thomas-robinson, underwoo, uramirez8707, wrongkindofdoctor, zhi-liang


fms's Issues

Trouble building without MPI

Is the code expected to compile without MPI?

When I try a gfortran build without use_libMPI defined, the build fails in the mpp directory like this:

libtool: compile:  gfortran -DPACKAGE_NAME=\"FMS\" -DPACKAGE_TARNAME=\"fms\" -DPACKAGE_VERSION=\"2.0-development\" "-DPACKAGE_STRING=\"FMS 2.0-development\"" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE_URL=\"\" -DPACKAGE=\"fms\" -DVERSION=\"2.0-development\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\" -Duse_netCDF=1 -I. -I../include -Iinclude -I/usr/local/netcdf-c-4.6.2/include -I/usr/local/netcdf-fortran-4.4.5_c_4.6.2/include -fcray-pointer -fdefault-double-8 -fdefault-real-8 -Waliasing -ffree-line-length-none -fno-range-check -c mpp.F90  -fPIC -o .libs/mpp.o
../include/fms_platform.h:114:0:



subroutine MPP_ALLTOALLW_(sbuf, ssize, sdispl, stype, &
                         1
Error: Procedure ‘mpp_alltoallw_’ at (1) is already defined at (2)
include/mpp_alltoall_nocomm.h:62:25-25:

 subroutine MPP_ALLTOALLW_(sbuf, ssize, sdispl, stype, &
                         1
Error: Procedure ‘mpp_alltoallw_’ at (1) is already defined at (2)
include/mpp_alltoall_nocomm.h:62:25-25:

 subroutine MPP_ALLTOALLW_(sbuf, ssize, sdispl, stype, &
                         1
Error: Procedure ‘mpp_alltoallw_’ at (1) is already defined at (2)
include/mpp_comm_nocomm.inc:1113:42:

     call mpp_error(NOTE, 'MPP_TYPE_FREE: ' &
                                          1
Error: Syntax error in argument list at (1)
include/mpp_alltoall_nocomm.h:63:30:

                           rbuf, rsize, rdispl, rtype, pelist)
                              1
Error: Array ‘rbuf’ at (1) cannot have a deferred shape
include/mpp_alltoall_nocomm.h:63:45:

                           rbuf, rsize, rdispl, rtype, pelist)
                                             1
Error: Array ‘rdispl’ at (1) cannot have a deferred shape
include/mpp_alltoall_nocomm.h:63:37:

                           rbuf, rsize, rdispl, rtype, pelist)
                                     1
Error: Array ‘rsize’ at (1) cannot have a deferred shape
include/mpp_alltoall_nocomm.h:63:52:

                           rbuf, rsize, rdispl, rtype, pelist)
                                                    1
Error: Array ‘rtype’ at (1) cannot have a deferred shape
include/mpp_alltoall_nocomm.h:62:30:

 subroutine MPP_ALLTOALLW_(sbuf, ssize, sdispl, stype, &
                              1
Error: Array ‘sbuf’ at (1) cannot have a deferred shape
include/mpp_alltoall_nocomm.h:62:45:

 subroutine MPP_ALLTOALLW_(sbuf, ssize, sdispl, stype, &
                                             1
Error: Array ‘sdispl’ at (1) cannot have a deferred shape
include/mpp_alltoall_nocomm.h:62:37:

 subroutine MPP_ALLTOALLW_(sbuf, ssize, sdispl, stype, &
                                     1
Error: Array ‘ssize’ at (1) cannot have a deferred shape
include/mpp_alltoall_nocomm.h:62:52:

 subroutine MPP_ALLTOALLW_(sbuf, ssize, sdispl, stype, &
                                                    1
Error: Array ‘stype’ at (1) cannot have a deferred shape
include/mpp_alltoall_nocomm.h:63:30:

                           rbuf, rsize, rdispl, rtype, pelist)
                              1

update affinity code to properly handle cpusets

The default behavior of Slurm has impacted the ability of the affinity-handling code to properly place MPI tasks and OpenMP threads. Updates are needed to work within Slurm environments that assign MPI tasks to groups of processors using cpusets.

The associated source file will be moved to its own directory within FMS, and a function to handle affinity will be added. This will necessitate a change in NOAA-GFDL/FMScoupler (see FMSCoupler issue #7).

suggest versioned releases of the FMS package...

@underwoo, @bensonr, @Zhi-Liang and other FMS code leaders, I suggest that you do a versioned release of this package immediately, and then do them periodically thereafter.

I understand that you also have a larger versioning scheme, which applies to your entire model stack, in which you tag every repo involved to mark the exact software used in a particular model build. This meets the versioning needs of the users of that model, but not of other FMS users (like me).

What I am suggesting is that, in addition to that, you release versions of FMS, so that external users can just install a version of FMS and use it until its next release.

If you do a versioned release, the current master and all branches are all still available for anyone using them in their workflow. The ability to create an over-all tag for all software is still there, and all your existing workflows still work.

But new workflows, in accordance with general practice, can be followed. Users and researchers will have a canonical version number for the FMS package. Different FMS users will be able to understand what version of the software they are using, and be able to manage upgrades properly.

For example, when I add PIO capability to FMS, I will be testing it with various versions of netcdf-c, netcdf-fortran, parallel-netcdf, and HDF5. This is so I can say: "version 2.0 of FMS has been tested with netcdf-c 4.6.2, 4.6.3, etc." Without this information, how do sysadmins know what packages they need to upgrade to support your software?

Providing a well-understood versioned release is part of having a library that others are relying upon. It's necessary so we all can manage our dependencies.

To do a versioned release of FMS, there are a few quick steps that must be followed. It's about a 5-minute process to generate a versioned, tested, portable, and stand-alone tarball for the FMS package.

Please let me know your thinking, and I can guide you through the process the first time. In particular, there are some shared library version-info numbers that must be updated properly, so that shared libraries will work with this code correctly.

FMS does not allow more than 1024 diagnostic files

The diag_manager_nml has a max_files option; however, values above 1024 have no effect, because the maximum number of netCDF files is hard-coded to 1024 at mpp/include/mpp_io_misc.inc:42.

When doing MOM6 diagnostic testing we currently write each diagnostic to a separate file. We have reached this limit.

If we instead write many diagnostics to the same file, we exceed MAX_FIELDS_PER_FILE, which is hard-coded to 300 at diag_data.F90:78.

compilers that upper-case mod file names?

Fifteen years ago, when I put together the netcdf-fortran build system, there were some Fortran compilers that would upper-case module file names.

In other words, where gfortran produces fms_mod.mod, some compilers would yield FMS_MOD.mod.

This was annoying but there is a way to handle it in the build system. However I would like to avoid it if I can.

I don't know whether any modern Fortran compilers still do this. If you try to build and see an error from this, let me know and I will put the goop into the autotools system that will handle it.

Request: add check that '.res.' is already present in restart file name to fms2_io

@menzel-gfdl
MOM6 already includes the .res. in the base restart file name. Since the fms2_io::open_file routine calls a routine that automatically appends .res. to the file name when is_restart = .true., MOM_restart has to perform several additional restart-name string checks and carry extra variables to hold two versions of the restart file name: one with .res. and one without.

It would be simpler if:
a) fms2_io::open_file requires the user to specify .res. in restart file names, or
b) fms2_io::restart_file_path_mangle subroutine also checks for whether .res. is already present in the file name passed to the routine if is_restart = .true., and only appends .res. if it is not.

mkmf build failing in various ways...

I am trying to get the mkmf build working. I have read all the documentation I could find, as well as the wiki, but no luck.

Here's one of the problems:

mpicc -DNDEBUG  -D__IFC -I/usr/local/netcdf-c-4.6.2_mpich-3.2/include -O2    -c -I./mosaic -I/usr/local/netcdf-c-4.6.2/include	./mosaic/read_mosaic.c

./mosaic/read_mosaic.c: In function ‘handle_netcdf_error’:
./mosaic/read_mosaic.c:38:43: warning: implicit declaration of function ‘nc_strerror’; did you mean ‘strerror’? [-Wimplicit-function-declaration]
   sprintf( errmsg, "%s: %s", msg, (char *)nc_strerror(status) );
                                           ^~~~~~~~~~~
                                           strerror
./mosaic/read_mosaic.c:38:35: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
   sprintf( errmsg, "%s: %s", msg, (char *)nc_strerror(status) );
                                   ^
./mosaic/read_mosaic.c: In function ‘get_var_data’:
./mosaic/read_mosaic.c:232:3: error: unknown type name ‘nc_type’
   nc_type vartype;
   ^~~~~~~
./mosaic/read_mosaic.c: In function ‘get_var_data_region’:
./mosaic/read_mosaic.c:292:3: error: unknown type name ‘nc_type’
   nc_type vartype;
   ^~~~~~~
Makefile:151: recipe for target 'read_mosaic.o' failed
make: *** [read_mosaic.o] Error 1
make: *** Waiting for unfinished jobs...

It seems not to be finding netcdf.h, even though the include directories are provided on the command line...

mkmf build having trouble with MPI functions...

I am getting this error with my mkmf build:

./mpp/include/mpp_alltoall_nocomm.h:62:25-25:

subroutine MPP_ALLTOALLW_(sbuf, ssize, sdispl, stype, &
1
Error: Procedure ‘mpp_alltoallw_’ at (1) is already defined at (2)
./mpp/include/mpp_alltoall_nocomm.h:62:25-25:

subroutine MPP_ALLTOALLW_(sbuf, ssize, sdispl, stype, &
1

Error: Procedure ‘mpp_alltoallw_’ at (1) is already defined at (2)
./mpp/include/mpp_alltoall_nocomm.h:62:25-25:

 subroutine MPP_ALLTOALLW_(sbuf, ssize, sdispl, stype, &
                         1
Error: Procedure ‘mpp_alltoallw_’ at (1) is already defined at (2)
./mpp/include/mpp_comm_nocomm.inc:1113:42:

     call mpp_error(NOTE, 'MPP_TYPE_FREE: ' &
                                          1

Any idea what might be causing this?

diag_manager: Regional subsetting for u-point variables does not seem to be using correct grid

The diag_manager may not be using the correct domain when trying to find the array indices for the regional subsetting. This problem can be demonstrated using the Pacific_undercurrent umo diagnostic in the OM4_025 test case. Essentially what seems to be happening is that the vector of longitudes at the corner (q) points (which are ostensibly placeholders and not representative of the tripolar grid) are incorrectly being used to find the point closest to the requested longitudinal point. Instead the diagnostic manager should use the full 2D lat/lon arrays to try to find the correct indices for the requested subregion.

A notebook demonstrating this case in more detail can be found at https://gist.github.com/ashao/77a8cdc20eb18dd4adb7f388e816bd6a. To reproduce it yourself, any ocean_static.nc file from an OM4_025 run can be used; other model outputs can be found in a run that @nikizadehgfdl did:

/lustre/f2/scratch//oar.gfdl.ogrp-account/work/nnz/xanadu_mom6_devgfdl_20190412/OM4p25_IAF_BLING_csf_rerun_ShaoDiags_1x1m0d_1756x1o.o268462119/output.stager/lustre/f2/scratch/oar.gfdl.ogrp-account/nnz/xanadu_mom6_devgfdl_20190412/OM4p25_IAF_BLING_csf_rerun_ShaoDiags/ncrc4.intel16-prod/archive/1x1m0d_1756x1o/history/17080101.nc/

error building oda_core_ecda.F90, can't find xbt_adjust

There's one module I can't build, oda_core_ecda.F90.

Where is xbt_adjust?

For now I just have this module commented out of the autotools build.

/bin/bash ../libtool  --tag=FC   --mode=compile mpifort -DPACKAGE_NAME=\"FMS\" -DPACKAGE_TARNAME=\"fms\" -DPACKAGE_VERSION=\"2.0-development\" -DPACKAGE_STRING=\"FMS\ 2.0-development\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE_URL=\"\" -DPACKAGE=\"fms\" -DVERSION=\"2.0-development\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\" -Duse_netCDF=1 -Duse_libMPI=1 -I.  -I../include -I../fms -I../mpp -I../time_manager -I../axis_utils -I../horiz_interp -I../constants -I/usr/local/netcdf-c-4.6.2_mpich-3.2/include -I/usr/local/netcdf-fortran-4.4.5_c_4.6.2_mpich-3.2/include  -fcray-pointer -fdefault-double-8 -fdefault-real-8 -Waliasing -ffree-line-length-none -fno-range-check -c -o oda_core_ecda.lo oda_core_ecda.F90
libtool: compile:  mpifort -DPACKAGE_NAME=\"FMS\" -DPACKAGE_TARNAME=\"fms\" -DPACKAGE_VERSION=\"2.0-development\" "-DPACKAGE_STRING=\"FMS 2.0-development\"" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE_URL=\"\" -DPACKAGE=\"fms\" -DVERSION=\"2.0-development\" -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\" -Duse_netCDF=1 -Duse_libMPI=1 -I. -I../include -I../fms -I../mpp -I../time_manager -I../axis_utils -I../horiz_interp -I../constants -I/usr/local/netcdf-c-4.6.2_mpich-3.2/include -I/usr/local/netcdf-fortran-4.4.5_c_4.6.2_mpich-3.2/include -fcray-pointer -fdefault-double-8 -fdefault-real-8 -Waliasing -ffree-line-length-none -fno-range-check -c oda_core_ecda.F90  -fPIC -o .libs/oda_core_ecda.o
oda_core_ecda.F90:49:6:

   use xbt_adjust, only : xbt_drop_rate_adjust
      1
Fatal Error: Can't open module file ‘xbt_adjust.mod’ for reading at (1): No such file or directory
compilation terminated.
Makefile:415: recipe for target 'oda_core_ecda.lo' failed
make[1]: *** [oda_core_ecda.lo] Error 1
make[1]: Leaving directory '/home/ed/tmp/f3/oda_tools'
Makefile:399: recipe for target 'all-recursive' failed
make: *** [all-recursive] Error 1

What should be installed?

With your mkmf build system, there does not seem to be a "make install" target. In the autotools system, of course there is. ;-)

Right now "make install" installs only the libFMS library (static and shared). It is installed in the lib directory under whatever was provided to the configure script prefix argument. For example:

./configure --prefix=/home/ed/test_fms && make install

will build the FMS library and install it in /home/ed/test_fms/lib, creating directories as needed.

With netcdf-fortran and some other fortran packages I work with, we also install the .mod files. With netcdf-fortran we also install a netcdf.inc file, and some man pages.

So the question is, do we know of anything we want to install other than libFMS?

adding support for PIO to FMS for better I/O performance

PIO (https://github.com/NCAR/ParallelIO) is an HPC I/O library that has been used in CESM at NCAR and has recently become an output option for WRF.

PIO allows the same code to use netCDF classic, netCDF-4 parallel, netCDF-4 sequential/compressed, or pnetcdf for input/output, on a file-by-file basis. It also allows designation of an arbitrary number of I/O processors to handle all I/O. It will also allow use of new technologies being added to the netCDF C library, like zarr, and to the HDF5 library, including some other forms of cloud storage. Applications like FMS will not have to change their code to use these new features.

PIO users can easily switch from netCDF classic to pnetcdf to HDF5 to zarr; this can be a run-time decision, or changed at compile time via the mode flag in nf_create/nf_open. Furthermore, PIO allows codes to scale to thousands of processors or more, while still providing the performance of parallel I/O from a much more reasonable number of I/O processors.

Right now I am finishing a PIO-netCDF integration project. PIO will become available to existing netCDF code (C or Fortran), via the use of a mode flag in nc_create/nc_open. (Some code changes are required to set up the I/O system, define data decomposition, and do reads/writes of the distributed arrays.) The netCDF-PIO integration will be available for the next released versions of netcdf-c and PIO.

Using this integrated code, I can convert the FMS code without changing most netCDF calls. I see that you already have a data decomposition scheme and I will have to map this on to PIO's data decomposition functions.

In order to execute these changes, of course I need the mpp directory to be fully tested, so I will add tests to cover untested code, until we can all be confident that no changes will break existing code.

My plan is to submit these changes as a pull request to FMS. I hope that you will consider it for merging to master, once I have demonstrated its value.

I hope that PIO will allow users to easily try out a variety of different strategies, and use what works best in each case.

Travis-CI trusty openMPI missing mpi_comm_create_group

The OpenMPI library used in the trusty travis-ci image does not have mpi_comm_create_group, which was added to libFMS in the latest release. The .travis.yml file needs to be updated to either use mpich2 or move to a newer (xenial) image.

Loosen F2000 restrictions

Hello all. I currently maintain the FMS instance inside the GEOS model here at Goddard. While we have some plans to perhaps use your GitHub repository as a submodule, for now it is somewhat walled off and merged into our base.

But until that point, I did have a request from my work with FMS: treat all compilers as F2000-compliant. At present, include/fms_platform.h only allows Intel, as such:

#if defined(__INTEL_COMPILER)
#define _F95
#define _F2000
#endif

when in truth GCC and PGI quite easily are, and NAG should be as well (I haven't gotten NAG to build FMS yet because it is fundamentally impossible at the moment; I've emailed Raymond Menzel about that). So, if anything, the ALLOCATABLE-component version of FMS should be the default and the pointer one the option.

In my local copy, I've just removed the test, and I could easily do a pull request for that off a fork, but I didn't know if you had a better thought on how to handle it.

problematic regional diagnostics for certain ocean layout

I used MOM6 tagged "om4/v1.0.1" and dumped regional oceanic diagnostics in the Caribbean Windward Passage. The variable "vmo" (Ocean Mass Y Transport) looked unreasonable when I used a certain ocean layout, for example,
ocn ranks="4671" threads="1" layout = "90,72" io_layout = "1,4" mask_table="mask_table.1809.90x72".

The untarred history file is
gfdl:/archive/oar.gfdl.cmip6/CM4/warsaw_201710_om4_v1.0.1/CM4_piControl_C/gfdl.ncrc4-intel16-prod-openmp/history/tmp_650/06500101.ocean_Windward_Passage.nc

The stdout is gfdl:/archive/oar.gfdl.cmip6/CM4/warsaw_201710_om4_v1.0.1/CM4_piControl_C/gfdl.ncrc4-intel16-prod-openmp/ascii/06500101.ascii_out.tar

Could you please take a look? Please let me know if any other info is needed.

Thanks.

Adding ensemble number to diag_integral.out

A user requested adding the ensemble number to the end of the diag_integral.out filename to distinguish between ensembles.

As background, the reason I'm interested in this is that I've used this
file to figure out the radiative imbalance of the model in the past with
single member runs. Now that I have ensembles to play with I'd like to
be able to do the same thing for each ensemble member and see the spread.
I still get a single diag_integral.out file from my runs, but I have no
idea which member the data belongs to or whether the data gets written
by the first, last, or "last to get to the routine" member. I don't even
know if different timesteps are from the same member or an average of
the entire ensemble.

affinity.c has only one possible return value

Looking at the mpp/affinity.c code, the get_cpu_affinity function has two return lines.

  if (last_cpu != -1) {return (first_cpu);}
  return (last_cpu == -1) ? first_cpu : -1;

However, when looking at all the possible ways these branches can be evaluated, there is only one reachable return value: first_cpu. The body should be reduced to a single return statement.

Further evaluation shows the function sets a default value for first_cpu = -1, and this should be safe.

new autotools build does not work on theia...

For fun I packaged up an FMS dist tarball and tried to build it on Theia.

Here's how I tried to build:
CC=mpiicc FC=mpiifort CPPFLAGS='-I/home/Edward.Hartnett/local/netcdf-4.6.3-development_mpiicc/include -I/home/Edward.Hartnett/local/netcdf-fortran-4.4.5_c_4.6.3-development_mpiifort/include' LDFLAGS='-L/home/Edward.Hartnett/local/netcdf-4.6.3-development_mpiicc/lib -L/home/Edward.Hartnett/local/netcdf-fortran-4.4.5_c_4.6.3-development_mpiifort/lib' ./configure

Here's how it failed:

mpiifort -DPACKAGE_NAME=\"FMS\" -DPACKAGE_TARNAME=\"fms\" -DPACKAGE_VERSION=\"2.0-development\" -DPACKAGE_STRING=\"FMS\\
 2.0-development\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE_URL=\"\" -DPACKAGE=\"fms\" -DVERSION=\"2.0-development\" -DSTDC_H\
EADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_\
H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -DHAVE_DLFCN_H=1 -DLT_OBJDIR=\".libs/\" -Duse_netCDF=1 -Dus\
e_libMPI=1 -I.  -I../include -I../mpp/include -I/home/Edward.Hartnett/local/netcdf-4.6.3-development_mpiicc/include -I/\
home/Edward.Hartnett/local/netcdf-fortran-4.4.5_c_4.6.3-development_mpiifort/include  -g -c -o mpp.o mpp.F90
fms_platform.h(114): #warning: macro redefined: QUAD_KIND
mpp_comm_mpi.inc(1386): #warning: macro redefined: MPP_TYPE_
mpp_comm_mpi.inc(1387): #warning: macro redefined: MPI_TYPE_
mpp_comm_mpi.inc(1390): #warning: macro redefined: MPP_TYPE_CREATE_
mpp_comm_mpi.inc(1391): #warning: macro redefined: MPP_TYPE_
mpp_comm_mpi.inc(1392): #warning: macro redefined: MPI_TYPE_
mpp_comm_mpi.inc(1395): #warning: macro redefined: MPP_TYPE_CREATE_
mpp_comm_mpi.inc(1396): #warning: macro redefined: MPP_TYPE_
mpp_comm_mpi.inc(1397): #warning: macro redefined: MPI_TYPE_
mpp_comm_mpi.inc(1400): #warning: macro redefined: MPP_TYPE_CREATE_
mpp_comm_mpi.inc(1401): #warning: macro redefined: MPP_TYPE_
mpp_comm_mpi.inc(1402): #warning: macro redefined: MPI_TYPE_
mpp_comm_mpi.inc(1405): #warning: macro redefined: MPP_TYPE_CREATE_
mpp_comm_mpi.inc(1406): #warning: macro redefined: MPP_TYPE_
mpp_comm_mpi.inc(1407): #warning: macro redefined: MPI_TYPE_
mpp_comm_mpi.inc(1410): #warning: macro redefined: MPP_TYPE_CREATE_
mpp_comm_mpi.inc(1411): #warning: macro redefined: MPP_TYPE_
mpp_comm_mpi.inc(1412): #warning: macro redefined: MPI_TYPE_
Using 8-byte addressing
Using pure routines.
Using allocatable derived type array members.
Using cray pointers.
../mpp/include/mpp_comm_mpi.inc(258): error #6285: There is no matching specific subroutine for this generic subroutine\
 call.   [MPP_MIN]
        tmin = t; call mpp_min(tmin)
-----------------------^
../mpp/include/mpp_comm_mpi.inc(259): error #6285: There is no matching specific subroutine for this generic subroutine\
 call.   [MPP_MAX]
        tmax = t; call mpp_max(tmax)
-----------------------^
../mpp/include/mpp_comm_mpi.inc(260): error #6285: There is no matching specific subroutine for this generic subroutine\
 call.   [MPP_SUM]
        tavg = t; call mpp_sum(tavg); tavg = tavg/mpp_npes()
-----------------------^

mkmf build error: cannot have deferred shape...

I am getting lots of these errors with the mkmf build I am trying:

./mpp/include/mpp_alltoall_nocomm.h:62:25-25:

subroutine MPP_ALLTOALLW_(sbuf, ssize, sdispl, stype, &
1
Error: Procedure ‘mpp_alltoallw_’ at (1) is already defined at (2)
./mpp/include/mpp_alltoall_nocomm.h:62:25-25:

subroutine MPP_ALLTOALLW_(sbuf, ssize, sdispl, stype, &
1
Error: Procedure ‘mpp_alltoallw_’ at (1) is already defined at (2)
./mpp/include/mpp_alltoall_nocomm.h:62:25-25:

subroutine MPP_ALLTOALLW_(sbuf, ssize, sdispl, stype, &
1
Error: Procedure ‘mpp_alltoallw_’ at (1) is already defined at (2)
./mpp/include/mpp_comm_nocomm.inc:1113:42:

 call mpp_error(NOTE, 'MPP_TYPE_FREE: ' &
                                      1

Anyone know what is causing these?

write_record() sets IO stack size even when not needed

The mpp_io_stack is used by write_record() as scratch space when writing a diagnostic with non-standard data packing. However, the mpp_io_stack is grown to the same size as the data even when no packing is used.

This results in large, unnecessary memory usage. For example, I recently had a case where the mpp_io_stack was grown to 200 MB even though it was not needed. In some cases this can cause a process to run out of memory and exit.

I suggest that we only increase the IO stack when it is needed.

Ensemble manager interface is difficult to use

The return value from the FMS ensemble interface get_ensemble_size() is not valid until ensemble_pelist_setup() has been called; before that it returns bad values. A check for this would be good, as it has caused problems in the past.

configure should detect lack of mpif.h

Currently, if I forget to set FC to mpifort, then the build system tries to use gfortran, and fails in drifters_comm.F90, because it can't find mpif.h.

This should be detected by configure, so the user gets a nice error message instead of a rude compile failure.
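A minimal sketch of such a check, assuming an Autoconf-based configure (the macros below are standard Autoconf; the error message is illustrative):

```m4
# configure.ac fragment (sketch): fail early if the Fortran compiler
# cannot find mpif.h, instead of failing later in drifters_comm.F90.
AC_LANG_PUSH([Fortran])
AC_MSG_CHECKING([whether the Fortran compiler can find mpif.h])
AC_COMPILE_IFELSE(
  [AC_LANG_PROGRAM([], [[      include 'mpif.h'
      integer :: ierr
      call MPI_Init(ierr)]])],
  [AC_MSG_RESULT([yes])],
  [AC_MSG_RESULT([no])
   AC_MSG_ERROR([mpif.h not found; set FC to an MPI wrapper such as mpifort])])
AC_LANG_POP([Fortran])
```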

Remove dev/master branch

master is up-to-date with dev/master via #103, so dev/master is ready to be deleted. The branch is protected, so the repository owner needs to remove it.

is netcdf required for FMS build?

We have the preprocessor macro use_netCDF, which we are always setting.

Does FMS build without netCDF? Or is netcdf always required?

add sample Fortran test to the autotools build

I will add a Fortran test to the autotools build. The test will simply return 0 (i.e. pass), but it gives you a place to insert test code for your library.

To do this I will add a test directory, which will be compiled after libFMS, the directory where the combined libFMS library is built. The library must be built before the tests are run, so the tests must live in their own directories. This also makes it easy to organize tests. (For example, the netcdf-c build has test directories nctest, nc_test, nc_test4, dap_test, hdf4_test, etc.)

I will add a directory test_fms and put a simple test in there. This test will be run by "make check", so it's a starting point for your continuous integration.
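A sketch of the Automake wiring this describes (abridged; the test program name is illustrative, the directory names follow the text above):

```makefile
# Top-level Makefile.am: list test_fms after libFMS so the combined
# library is built before the tests
SUBDIRS = libFMS test_fms

# test_fms/Makefile.am: one trivial test program run by "make check"
check_PROGRAMS = test_simple          # built only for "make check"
test_simple_SOURCES = test_simple.F90
TESTS = $(check_PROGRAMS)             # Automake runs these on "make check"
```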

questions about openmp build

How does building with openmp work?

Do we just provide the -openmp flag to the Fortran compiler?

Are there any pre-processor macros that need to be set?
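For reference, the mkmf template reproduced further down this page treats OpenMP as a make-time switch: setting OPENMP to any non-blank value appends the compiler's OpenMP flag (-fopenmp for GNU; recent Intel compilers use -qopenmp) to the compile and link flags. No FMS-specific preprocessor macro appears in that template; the compiler flag itself defines the standard _OPENMP macro. A sketch of the pattern:

```makefile
# Sketch, following the mkmf-template convention quoted below on this page
OPENMP =              # set OPENMP=on at the make command line to enable
FFLAGS_OPENMP  = -fopenmp   # GNU; -qopenmp for recent Intel compilers
CFLAGS_OPENMP  = -fopenmp
LDFLAGS_OPENMP = -fopenmp

ifdef OPENMP
FFLAGS  += $(FFLAGS_OPENMP)
CFLAGS  += $(CFLAGS_OPENMP)
LDFLAGS += $(LDFLAGS_OPENMP)
endif
```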

Eliminate remaining uses of Cray pointers

We (NASA GMAO) would like to build GEOS using the NAG Fortran compiler to perform various aggressive compile-time and runtime checks provided by that compiler. Unfortunately, NAG does not support Cray pointers, so a workaround is needed for FMS.

Now that there is a firm integration path, I will be happy to attempt the necessary changes and then issue a pull request.
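As a hedged sketch of the kind of change involved (variable and routine names here are illustrative, not FMS code): a non-standard Cray-pointer aliasing can typically be replaced with the standard iso_c_binding equivalent.

```fortran
! Cray-pointer form (non-standard; rejected by NAG):
!   real :: buf(n)
!   pointer (p, buf)
! Standard replacement using iso_c_binding:
subroutine map_buffer(cptr, n, buf)
  use iso_c_binding, only: c_ptr, c_f_pointer
  type(c_ptr), intent(in)         :: cptr    ! address formerly held in p
  integer, intent(in)             :: n
  real, pointer, intent(out)      :: buf(:)
  call c_f_pointer(cptr, buf, [n])           ! associate buf with that memory
end subroutine map_buffer
```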

domain_read: answer changes in land with new io

I am getting answer changes when running with the new io in land because the river restart is not read correctly. I think it is because the code sets every rank as the root pe and uses the same starting and ending indices in the read, so all ranks read the same part of the file.

In L322-324 of fms_netcdf_domain_io.F90, when opening the file the code gets the pelist from the io_domain

  pelist_size = mpp_get_domain_npes(io_domain)
  allocate(pelist(pelist_size))
  call mpp_get_pelist(io_domain, pelist)

The pelist ends up being a one-element array containing only the current pe.

Then, in L433-442 of netcdf_io.F90, the code assigns the first element of the pelist as the root pe of the fileobj.

In domain_read.inc, in L133-140 (for a 2D read), it gets the starting index and size for each pe in the list:

    call mpp_get_compute_domains(io_domain, xbegin=pe_isc, xsize=pe_icsize, position=xpos)
    call mpp_get_compute_domains(io_domain, ybegin=pe_jsc, ysize=pe_jcsize, position=ypos)
    do i = 1, size(fileobj%pelist)
      c(xdim_index) = pe_isc(i) - pe_isc(1) + 1
      c(ydim_index) = pe_jsc(i) - pe_jsc(1) + 1
      e(xdim_index) = pe_icsize(i)
      e(ydim_index) = pe_jcsize(i)

c and e end up being the same for every pe, so every rank reads the same data.

This is my work dir /lustre/f2/scratch/gfdl/Uriel.Ramirez/work/esm2mb-luh2/esm2mb-luh2_1x0m0d72h_64x1a.o201466261

In this case, there is only 1 tile.
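The index arithmetic above can be modeled outside Fortran to see the failure mode: when every rank's pelist collapses to a single element (its own pe), the computed start c and count e are identical on every rank. A small Python model (the names mirror the Fortran loop; nothing here is FMS code):

```python
def read_windows(pe_isc, pe_jsc, pe_icsize, pe_jcsize):
    """Mimic the domain_read.inc loop: start (c) and count (e) per pelist entry."""
    windows = []
    for i in range(len(pe_isc)):
        # Offsets are relative to the first pelist entry, as in the Fortran
        c = (pe_isc[i] - pe_isc[0] + 1, pe_jsc[i] - pe_jsc[0] + 1)
        e = (pe_icsize[i], pe_jcsize[i])
        windows.append((c, e))
    return windows

# Healthy io_domain: two ranks splitting the x axis -> distinct windows
print(read_windows([1, 11], [1, 1], [10, 10], [20, 20]))
# Broken case: every rank sees a one-element pelist -> the start always
# collapses to (1, 1), regardless of where the compute domain really begins
print(read_windows([11], [1], [10], [20]))
```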

How to run FMS tests?

I see in various directories what seems to be test code. For example, in mpp there is a program test_mpp.F90.

When I build this and run it I don't get good results. Am I doing something wrong? How do you build this? How do you run the tests?

The mkmf system seems to lump these test files into the build of the library. Is that expected?

QUAD_KIND from fms_platform.h redefined

fms_platform.h is included a lot by users of FMS. Due to some ifdefs, it generates a warning every time it is used. This can be easily fixed.

/usr/local/fms/include/fms_platform.h:114:0:

 #define QUAD_KIND DOUBLE_KIND
 
Warning: "QUAD_KIND" redefined
/usr/local/fms/include/fms_platform.h:39:0:

 #define QUAD_KIND 16
 
note: this is the location of the previous definition

Error linking MOM6 and FMS libraries when building on Mac with gnu compilers

I am about 90% of the way to compiling the FMS and MOM6 libraries with gcc 8.2 in debug mode on my personal machine running MacOS Mojave (v 10.14.2). Following the steps on the MOM6 wiki for compiling the ocean-only configuration here, I am able to compile FMS and MOM6, but get the following error during the link stage:

Undefined symbols for architecture x86_64:
  "_mpi_abort_", referenced from:
      ___mpp_mod_MOD_mpp_error_basic in mpp.o
  "_mpi_allreduce_", referenced from:
      ___mpp_mod_MOD_mpp_sum_int4 in mpp.o
      ___mpp_mod_MOD_mpp_sum_int8 in mpp.o
      ___mpp_mod_MOD_mpp_sum_real8 in mpp.o
      ___mpp_mod_MOD_mpp_min_int4_1d in mpp.o
      ___mpp_mod_MOD_mpp_min_int4_0d in mpp.o
      ___mpp_mod_MOD_mpp_min_int8_1d in mpp.o
      ___mpp_mod_MOD_mpp_min_int8_0d in mpp.o
      ...
  "_mpi_alltoall_", referenced from:
      ___mpp_mod_MOD_mpp_alltoall_logical8 in mpp.o
      ___mpp_mod_MOD_mpp_alltoall_logical4 in mpp.o
      ___mpp_mod_MOD_mpp_alltoall_real8 in mpp.o
      ___mpp_mod_MOD_mpp_alltoall_real4 in mpp.o
      ___mpp_mod_MOD_mpp_alltoall_int8 in mpp.o
      ___mpp_mod_MOD_mpp_alltoall_int4 in mpp.o
  "_mpi_alltoallv_", referenced from:
      ___mpp_mod_MOD_mpp_alltoall_logical8_v in mpp.o
      ___mpp_mod_MOD_mpp_alltoall_logical4_v in mpp.o
      ___mpp_mod_MOD_mpp_alltoall_real8_v in mpp.o
      ___mpp_mod_MOD_mpp_alltoall_real4_v in mpp.o
      ___mpp_mod_MOD_mpp_alltoall_int8_v in mpp.o
      ___mpp_mod_MOD_mpp_alltoall_int4_v in mpp.o
  "_mpi_alltoallw_", referenced from:
      ___mpp_mod_MOD_mpp_alltoall_logical8_w in mpp.o
      ___mpp_mod_MOD_mpp_alltoall_logical4_w in mpp.o
      ___mpp_mod_MOD_mpp_alltoall_real8_w in mpp.o
      ___mpp_mod_MOD_mpp_alltoall_real4_w in mpp.o
      ___mpp_mod_MOD_mpp_alltoall_int8_w in mpp.o
      ___mpp_mod_MOD_mpp_alltoall_int4_w in mpp.o
  "_mpi_barrier_", referenced from:
      ___mpp_mod_MOD_mpp_sync in mpp.o
  "_mpi_bcast_", referenced from:
      ___mpp_mod_MOD_mpp_broadcast_logical4 in mpp.o
      ___mpp_mod_MOD_mpp_broadcast_logical8 in mpp.o
      ___mpp_mod_MOD_mpp_broadcast_int4 in mpp.o
      ___mpp_mod_MOD_mpp_broadcast_int8 in mpp.o
      ___mpp_mod_MOD_mpp_broadcast_real4 in mpp.o
      ___mpp_mod_MOD_mpp_broadcast_real8 in mpp.o
      ___mpp_mod_MOD_mpp_broadcast_char in mpp.o
      ...
  "_mpi_comm_create_group_", referenced from:
      ___mpp_mod_MOD_get_peset in mpp.o
  "_mpi_comm_group_", referenced from:
      ___mpp_mod_MOD_mpp_init in mpp.o
  "_mpi_comm_rank_", referenced from:
      ___mpp_mod_MOD_mpp_init in mpp.o
  "_mpi_comm_size_", referenced from:
      ___mpp_mod_MOD_mpp_init in mpp.o
  "_mpi_finalize_", referenced from:
      ___mpp_mod_MOD_mpp_exit in mpp.o
  "_mpi_get_count_", referenced from:
      ___mpp_mod_MOD_mpp_transmit_logical4 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_logical8 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_int4 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_int8 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_real4 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_real8 in mpp.o
      ___mpp_mod_MOD_mpp_sync_self in mpp.o
      ...
  "_mpi_group_incl_", referenced from:
      ___mpp_mod_MOD_get_peset in mpp.o
  "_mpi_init_", referenced from:
      ___mpp_mod_MOD_mpp_init in mpp.o
  "_mpi_initialized_", referenced from:
      ___mpp_mod_MOD_mpp_init in mpp.o
  "_mpi_irecv_", referenced from:
      ___mpp_mod_MOD_mpp_transmit_logical4 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_logical8 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_int4 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_int8 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_real4 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_real8 in mpp.o
  "_mpi_isend_", referenced from:
      ___mpp_mod_MOD_mpp_transmit_logical4 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_logical8 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_int4 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_int8 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_real4 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_real8 in mpp.o
  "_mpi_recv_", referenced from:
      ___mpp_mod_MOD_mpp_transmit_logical4 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_logical8 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_int4 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_int8 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_real4 in mpp.o
      ___mpp_mod_MOD_mpp_transmit_real8 in mpp.o
  "_mpi_type_commit_", referenced from:
      ___mpp_mod_MOD_mpp_type_create_logical8 in mpp.o
      ___mpp_mod_MOD_mpp_type_create_logical4 in mpp.o
      ___mpp_mod_MOD_mpp_type_create_real8 in mpp.o
      ___mpp_mod_MOD_mpp_type_create_real4 in mpp.o
      ___mpp_mod_MOD_mpp_type_create_int8 in mpp.o
      ___mpp_mod_MOD_mpp_type_create_int4 in mpp.o
  "_mpi_type_create_subarray_", referenced from:
      ___mpp_mod_MOD_mpp_type_create_logical8 in mpp.o
      ___mpp_mod_MOD_mpp_type_create_logical4 in mpp.o
      ___mpp_mod_MOD_mpp_type_create_real8 in mpp.o
      ___mpp_mod_MOD_mpp_type_create_real4 in mpp.o
      ___mpp_mod_MOD_mpp_type_create_int8 in mpp.o
      ___mpp_mod_MOD_mpp_type_create_int4 in mpp.o
  "_mpi_type_free_", referenced from:
      ___mpp_mod_MOD_mpp_type_free in mpp.o
  "_mpi_wait_", referenced from:
      ___mpp_mod_MOD_mpp_sync_self in mpp.o
  "_mpi_wtick_", referenced from:
      ___mpp_mod_MOD_system_clock_mpi in mpp.o
  "_mpi_wtime_", referenced from:
      ___mpp_mod_MOD_system_clock_mpi in mpp.o
  "_sched_setaffinity", referenced from:
      _set_cpu_affinity in affinity.o
ld: symbol(s) not found for architecture x86_64
collect2: error: ld returned 1 exit status
make: *** [MOM6] Error 1

Note that I had to replace Linux-specific macros with macOS equivalents in mpp/affinity.c. Please see the file in my fork for additional details here.

I used the following script to configure the environment and build FMS+MOM6 :

#!/bin/sh
# Set environment variables
export PKG_CONFIG_PATH="/opt/local/lib/pkgconfig"
export LD_LIBRARY_PATH="/usr/local/mpich-install/lib/"
buildDir="${HOME}/gfdl_model"  # "~" does not expand inside quotes, so use $HOME
execDir="${buildDir}/gnu8_debug/exec"
makeTemplate="WKoD_gnu8_2.mk"
mkdir -p ${execDir}
cd ${execDir}
# create fms directory if it does not exist
if [ ! -d "fms" ]; then
  mkdir "fms"
fi
cd "fms"
# remove the path_names file if it exists
pathFileList="path_names"
if [ -f "${pathFileList}" ]; then
  rm -f ${pathFileList}
fi
#create new list_paths file for shared files
${buildDir}/mkmf/bin/list_paths -l ${buildDir}/MOM6_unmodified/src/shared
# create the make file for fms
${buildDir}/mkmf/bin/mkmf -t ${buildDir}/mkmf/templates/${makeTemplate} -o '-I/usr/local/include -I/usr/local/mpich-install/include' -p libfms.a -c '-Duse_libMPI -Duse_netCDF -DSPMD' path_names
# run Make
make NETCDF=3 DEBUG=1 libfms.a -j 4 2>&1 | tee make.out
# Build MOM6
cd ${execDir}
# create fms directory if it does not exist
if [ ! -d "mom6" ]; then
  mkdir "mom6"
fi
cd "mom6"
if [ -f "${pathFileList}" ]; then
  rm -f ${pathFileList}
fi
#create new list_paths file for MOM6 solo_ocean configuration and the source code
${buildDir}/mkmf/bin/list_paths -l ${buildDir}/MOM6_unmodified/src/MOM6/{config_src/dynamic,config_src/solo_driver,src/{*,*/*}} ${buildDir}/MOM6_unmodified/src/shared/{mpp,mpp/include,include}
# create the makefile for MOM6
${buildDir}/mkmf/bin/mkmf -t ${buildDir}/mkmf/templates/${makeTemplate} -o '-I../fms -I/usr/local/mpich-install/include' -p MOM6 -l '-L../fms -lfms' -c '-Duse_libMPI -Duse_netCDF -DSPMD' path_names      
# Run make
make NETCDF=3 DEBUG=1 MOM6 -j 4  2>&1 | tee make.out

Other details:

  • gcc/gfortran: MacPorts gcc8 8.2.0_3
  • MPICH (installed from source): v 3.3
  • netcdf-c (installed from source via https://github.com/Unidata/netcdf-c ): netCDF 4.6.3-development
  • netcdf-fortran (installed from source via https://github.com/Unidata/netcdf-fortran): netCDF-Fortran 4.5.0-development
  • HDF-5 (installed from source): v 1.10.4
  • Makefile template (excluding the Tmpfiles macro):
# Template for the GNU Compiler Collection on Linux systems
#
# Typical use with mkmf
# mkmf -t WKoD_gnu8.mk -c"-Duse_libMPI -Duse_netCDF" path_names /usr/local/include

############
# Commands Macros
FC = gfortran
CC = gcc
CXX = g++
LD = gfortran $(MAIN_PROGRAM)

#######################
# Build target macros
#
# Macros that modify compiler flags used in the build.  Target
# macros are usually set on the call to make:
#
#    make REPRO=on NETCDF=3
#
# Most target macros are activated when their value is non-blank.
# Some have a single value that is checked.  Others will use the
# value of the macro in the compile command.

DEBUG =              # If non-blank, perform a debug build (Cannot be
                     # mixed with REPRO or TEST)

REPRO =              # If non-blank, perform a build that guarantees
                     # reproducibility from run to run.  Cannot be used
                     # with DEBUG or TEST

TEST  =              # If non-blank, use the compiler options defined in
                     # the FFLAGS_TEST and CFLAGS_TEST macros.  Cannot be
                     # used with REPRO or DEBUG

VERBOSE =            # If non-blank, add additional verbosity compiler
                     # options

OPENMP =             # If non-blank, compile with openmp enabled

NO_OVERRIDE_LIMITS = # If non-blank, do not use the -qoverride-limits
                     # compiler option.  Default behavior is to compile
                     # with -qoverride-limits.

NETCDF =             # If value is '3' and CPPDEFS contains
                     # '-Duse_netCDF', then the additional cpp macro
                     # '-Duse_LARGEFILE' is added to the CPPDEFS macro.

INCLUDES =           # A list of -I Include directories to be added to the
                     # the compile command.

SSE =                # The SSE options to be used to compile.  If blank,
                     # then use the default SSE settings for the host.
                     # Current default is to use SSE2.

COVERAGE =           # Add the code coverage compile options.

# Need to use at least GNU Make version 3.81
need := 3.81
ok := $(filter $(need),$(firstword $(sort $(MAKE_VERSION) $(need))))
ifneq ($(need),$(ok))
$(error Need at least make version $(need).  Load module gmake/3.81)
endif

# REPRO, DEBUG and TEST need to be mutually exclusive of each other.
# Make sure the user hasn't supplied two at the same time
ifdef REPRO
ifneq ($(DEBUG),)
$(error Options REPRO and DEBUG cannot be used together)
else ifneq ($(TEST),)
$(error Options REPRO and TEST cannot be used together)
endif
else ifdef DEBUG
ifneq ($(TEST),)
$(error Options DEBUG and TEST cannot be used together)
endif
endif

# Get number of CPUs 
#MAKEFLAGS += --jobs=$(shell grep '^processor' /proc/cpuinfo | wc -l)
MAKEFLAGS += --jobs=$(shell sysctl -n hw.ncpu)
# Macro for Fortran preprocessor
FPPFLAGS := $(INCLUDES)
# Fortran Compiler flags for the NetCDF library
FPPFLAGS += $(shell nf-config --fflags)
# Fortran Compiler flags for the MPICH MPI library

#FPPFLAGS += $(shell pkg-config --cflags-only-I mpich2)
FPPFLAGS += $(shell pkg-config --cflags-only-I mpich)

# Base set of Fortran compiler flags
FFLAGS := -fcray-pointer -fdefault-double-8 -fdefault-real-8 -Waliasing -ffree-line-length-none -fno-range-check

# Flags based on performance target (production (OPT), reproduction (REPRO), or debug (DEBUG))
FFLAGS_OPT = -O3
FFLAGS_REPRO = -O2 -fbounds-check
FFLAGS_DEBUG = -O0 -g -W -fbounds-check -fbacktrace -ffpe-trap=invalid,zero,overflow

# Flags to add additional build options
FFLAGS_OPENMP = -fopenmp
FFLAGS_VERBOSE =
FFLAGS_COVERAGE =

# Macro for C preprocessor
CPPFLAGS = $(INCLUDES)
# C Compiler flags for the NetCDF library
CPPFLAGS += $(shell nc-config --cflags)
# C Compiler flags for the MPICH MPI library
# Output the effective list of Cflags from the -I group for all the requested packages
# there's also an 'openpa.pc' package in the same directory, so this flag must ignore it
#CPPFLAGS += $(shell pkg-config --cflags-only-I mpich2)
CPPFLAGS += $(shell pkg-config --cflags-only-I mpich)

# Base set of C compiler flags
CFLAGS := -D__IFC

# Flags based on performance target (production (OPT), reproduction (REPRO), or debug (DEBUG))
CFLAGS_OPT = -O2
CFLAGS_REPRO = -O2
CFLAGS_DEBUG = -O0 -g

# Flags to add additional build options
CFLAGS_OPENMP = -fopenmp
CFLAGS_VERBOSE =
CFLAGS_COVERAGE =

# Optional testing compile flags.  Mutually exclusive from DEBUG, REPRO, and OPT.
# *_TEST will match the production flags if no new options are to be tested.
FFLAGS_TEST = $(FFLAGS_OPT)
CFLAGS_TEST = $(CFLAGS_OPT)

# Linking flags
LDFLAGS := -L/usr/local/mpich-install/lib -lmpich 
LDFLAGS_OPENMP := -fopenmp
LDFLAGS_VERBOSE :=
LDFLAGS_COVERAGE :=

# Start with a blank LIBS
LIBS =
# NetCDF library flags
LIBS += $(shell nf-config --flibs)
# MPICH MPI library flags
#LIBS += $(shell pkg-config --libs mpich2-f90)
LIBS += $(shell pkg-config --libs mpich)

# Get compile flags based on target macros.
ifdef REPRO
CFLAGS += $(CFLAGS_REPRO)
FFLAGS += $(FFLAGS_REPRO)
else ifdef DEBUG
CFLAGS += $(CFLAGS_DEBUG)
FFLAGS += $(FFLAGS_DEBUG)
else ifdef TEST
CFLAGS += $(CFLAGS_TEST)
FFLAGS += $(FFLAGS_TEST)
else
CFLAGS += $(CFLAGS_OPT)
FFLAGS += $(FFLAGS_OPT)
endif

ifdef OPENMP
CFLAGS += $(CFLAGS_OPENMP)
FFLAGS += $(FFLAGS_OPENMP)
LDFLAGS += $(LDFLAGS_OPENMP)
endif

ifdef SSE
CFLAGS += $(SSE)
FFLAGS += $(SSE)
endif

ifdef NO_OVERRIDE_LIMITS
FFLAGS += $(FFLAGS_OVERRIDE_LIMITS)
endif

ifdef VERBOSE
CFLAGS += $(CFLAGS_VERBOSE)
FFLAGS += $(FFLAGS_VERBOSE)
LDFLAGS += $(LDFLAGS_VERBOSE)
endif

ifeq ($(NETCDF),3)
  # add the use_LARGEFILE cppdef
  ifneq ($(findstring -Duse_netCDF,$(CPPDEFS)),)
    CPPDEFS += -Duse_LARGEFILE
  endif
endif

ifdef COVERAGE
ifdef BUILDROOT
PROF_DIR=-prof-dir=$(BUILDROOT)
endif
CFLAGS += $(CFLAGS_COVERAGE) $(PROF_DIR)
FFLAGS += $(FFLAGS_COVERAGE) $(PROF_DIR)
LDFLAGS += $(LDFLAGS_COVERAGE) $(PROF_DIR)
endif

LDFLAGS += $(LIBS)

can we run some of the tests in the mpp directory?

My work is taking me around to FMS again! ;-) I will need to do some work on the IO code for NOAA to try using PIO.

Before I start, I would love to get some automatic tests for the code in the mpp directory. I notice there are a lot of Fortran test codes in the directory. Can we turn these on as tests?

monin_obukhov_kernel.F90 names its mod file monin_obukhov_inter.mod

Here's a note that I just put into monin_obukhov/Makefile.am:

# Note that the name of the mod is different from the name of the F90
# code for monin_obukhov_kernel.F90. Also note that the mod file for
# this one does not have "_mod" in the name.

So this module is not named after its code, and also does not have the "_mod" in the name of the mod file (which I think is fine, but does not match the rest of the code base).

Dummy argument for CT_data_override_* should be intent(inout)

This involves code on head of branch: coupler_type_reform_rwh (@Hallberg-NOAA)

Calls to data_override() from CT_data_override_2d() and CT_data_override_3d() are updating values inside a dummy argument (var) that has intent(in).

mpiifort on NCEP's wcoss platform reports:

coupler_types.F90(3740): error #6780: A dummy argument with the INTENT(IN) attribute shall not be defined nor become undefined.   [VAR]
    call data_override(gridname, var%bc(n)%field(m)%name, var%bc(n)%field(m)%values, Time)

The solution is to give the dummy argument var the intent(inout) attribute.

Relevant code:

FMS/coupler/coupler_types.F90

Lines 3731 to 3757 in 3d8f68c

!> This subroutine potentially overrides the values in a coupler_2d_bc_type
subroutine CT_data_override_2d(gridname, var, Time)
character(len=3), intent(in) :: gridname !< 3-character long model grid ID
type(coupler_2d_bc_type), intent(in) :: var !< BC_type structure to override
type(time_type), intent(in) :: time !< The current model time
integer :: m, n
do n = 1, var%num_bcs ; do m = 1, var%bc(n)%num_fields
call data_override(gridname, var%bc(n)%field(m)%name, var%bc(n)%field(m)%values, Time)
enddo ; enddo
end subroutine CT_data_override_2d
!> This subroutine potentially overrides the values in a coupler_3d_bc_type
subroutine CT_data_override_3d(gridname, var, Time)
character(len=3), intent(in) :: gridname !< 3-character long model grid ID
type(coupler_3d_bc_type), intent(in) :: var !< BC_type structure to override
type(time_type), intent(in) :: time !< The current model time
integer :: m, n
do n = 1, var%num_bcs ; do m = 1, var%bc(n)%num_fields
call data_override(gridname, var%bc(n)%field(m)%name, var%bc(n)%field(m)%values, Time)
enddo ; enddo
end subroutine CT_data_override_3d
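The fix is the one-line declaration change described above; the corrected declarations would read:

```fortran
! In CT_data_override_2d:
type(coupler_2d_bc_type), intent(inout) :: var !< BC_type structure to override
! In CT_data_override_3d:
type(coupler_3d_bc_type), intent(inout) :: var !< BC_type structure to override
```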

Issue first reported by @jiandewang

some latitudes are zero when section is south of equator

In

https://github.com/NOAA-GFDL/MOM6-examples/blob/dev/gfdl/ice_ocean_SIS2/OM4_025/diag_table.MOM6

we have two sections (below) that sample MOM fields completely south of the equator. A handful of the southern latitudes have 0 for the latitude values, rather than -71 or the like. So the latitudes are not monotonic and that messes with the plotting, yielding spurious latitude values altogether for Ferret.

We (Adcroft and Griffies) think there is a bug in the diag manager.

Here are the sections

"ocean_model_z", "volcello", "volcello", "ocean_Drake_Passage", "all", "mean", "-70. -70. -71. -54.5 -1 -1",2

"ocean_model_z", "volcello", "volcello", "ocean_Agulhas_section", "all", "mean", "20. 20. -71.0 -34 -1 -1",2

An example output is

/archive/ogrp/CMIP6/OMIP/warsaw_201803_mom6_2018.04.06/OM4p25_IAF_fc0/gfdl.ncrc3-intel16-prod/pp/ocean_Drake_Passage/ts/monthly/20yr/ocean_Drake_Passage.198801-200712.umo.nc

parallel builds not actually working...

I had told you that make -j would work on this new build system, but that's not quite true yet, though I hope it will be soon.

Until further notice, don't use make -j; it will not work well.

The issue is that some of the mod files depend on other mod files being built first. I have to record these dependencies in the makefiles before parallel builds will work.

Axes attribute initialisation

Axes%num_attributes is not initialised. Since it can then take any value, it can cause problems such as array-bounds overruns.

`mpp_sum` default length for vectors

This might be more of a feature request or a question, rather than an issue, but we noticed that when calling mpp_sum for a vector, the length argument is required. Something like this:

integer :: x
integer :: T(5,5,5)

call mpp_sum(x)
call mpp_sum(T, size(T))

But it seems that length is almost always going to equal the size of the first argument. If that's correct (which is my question), would it be possible to have length default to that size when unset?

Also, I wasn't able to think of a good case where one would actually want this length to be a different value. Maybe it is actually worth omitting the argument altogether?
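What is being requested can be sketched with an optional dummy argument. Inside a generic like mpp_sum, the specific for a rank-3 integer might look like this (illustrative only, not the actual FMS interface):

```fortran
subroutine mpp_sum_int_3d(a, length)
  integer, intent(inout)         :: a(:,:,:)
  integer, intent(in), optional  :: length
  integer :: n
  ! Default the element count to the full size of the array argument
  n = size(a)
  if (present(length)) n = length
  ! ... reduce the first n elements of a across pes, as before ...
end subroutine mpp_sum_int_3d
```

With this, call mpp_sum(T) would behave like the current call mpp_sum(T, size(T)).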

Warning about argument aliasing in fms_io.F90

It is only a warning but nevertheless I think it is bad practice to alias arguments with conflicting intents. The compiler could legitimately do exactly the wrong thing and overwrite file_out before reading file_in.

../../../../../MOM6-examples/src/FMS/fms/fms_io.F90:7385.33:

       call get_mosaic_tile_file(actual_file, actual_file, is_no_domain, domain, tile_count)
                                 1
Warning: Same actual argument associated with INTENT(IN) argument 'file_in' and INTENT(OUT) argument 'file_out' at (1)
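One conventional fix is to pass a temporary for the intent(out) argument and copy it back, so no actual argument is aliased (a sketch; the temporary's name and length are illustrative):

```fortran
character(len=256) :: file_tmp   ! length chosen to match actual_file

call get_mosaic_tile_file(actual_file, file_tmp, is_no_domain, domain, tile_count)
actual_file = file_tmp
```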

diag_manager_nml::do_diag_field_log fails

When setting do_diag_field_log = .true., the model will print a log of lines:

Module|Field|Long Name|Units|Number of Axis|Time Axis|Missing Value|Min Value|Max Value|AXES LIST

and will then fail with the error:

FATAL from PE 486: diag_axis_mod::get_diag_axis_name: Illegal value for axis_id used (value 0).

how the build should handle fortran flags (FCFLAGS not FFLAGS)

Programmers of Fortran packages usually assume that the compiler flags should be set by the build system, but this is a peculiarity of Fortran programmers. Most build systems leave the setting of flags entirely to the user, and this is the wisest course.

In many cases, flags are necessary because programmers are not writing portable Fortran, and this is the case with FMS as well. For example, FMS requires -ffree-line-length-none because some of the code has lines longer than 132 characters without a continuation line. This could and should easily be avoided in the code. The first step to a robust build system is that all programmers write their code portably.

So we should not need to set flags, or, if we do, we should deal with the minimum possible number of them by coding for portability.

However, the current FMS build system does do a lot of flag setting, and I know that FMS users would miss this feature, so I have a set of changes ready to add to my branch which lets configure set Fortran flags (but also provides the capability to turn setting of flags off).

What would be nice would be to see a move away from dependence on fortran flags to get code to compile.

Replace `#include <mpif.h>` with `use mpi` on default platforms

Some hiccups on our system's wrapper scripts revealed that FMS models are still doing #include <mpif.h> to import the MPI symbols, even though the standard recommends the module (use mpi).

Currently only the sgi_mipspro flag enables use mpi. The default is to use the CPP #include statement.


To be annoyingly pedantic (sorry!), here are some quotes from the MPI 3.1 standard.

p605:

INCLUDE 'mpif.h': This method is described in Section 17.1.4. The use of the include file mpif.h is strongly discouraged starting with MPI-3.0, because this method neither guarantees compile-time argument checking nor provides sufficient techniques to solve the optimization problems with nonblocking calls, and is therefore inconsistent with the Fortran standard. It exists only for backwards compatibility with legacy MPI applications.

p611:

The use of the mpif.h include file is strongly discouraged and may be deprecated in a future
version of MPI.

The comments above refer to the Fortran include statement (include 'mpif.h'); the standard doesn't even consider using the CPP include, which is probably even more strongly discouraged.


Rather than send a patch, I figured that you guys might be more aware of any odd platforms that still require this and would be in a better position to make the change.
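The change itself is mechanical; the pattern is:

```fortran
! Before (strongly discouraged since MPI-3.0):
!   #include <mpif.h>       ! or the Fortran form: include 'mpif.h'
! After:
use mpi                     ! gives compile-time argument checking
! (MPI-3.0 also offers "use mpi_f08" for full type safety, where supported)
```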
