
escomp / cism

Community Ice Sheet Model

License: GNU Lesser General Public License v3.0

CMake 0.36% Shell 0.66% Fortran 82.66% C++ 1.73% Roff 0.21% C 5.52% Python 8.04% Makefile 0.06% MATLAB 0.62% HTML 0.06% Pascal 0.08%

cism's Introduction

The Earth System Community Modeling Portal (ESCOMP)

This repository provides information about the ESCOMP GitHub organization.

Management

The ESCOMP GitHub organization is managed by the University Corporation for Atmospheric Research.

Policies for inclusion of a repository in this organization

The ESCOMP GitHub organization is for large, community-oriented earth system modeling projects of broad interest.

Projects stored here should meet all of the following criteria:

  • They are related to earth system modeling in some way

  • They are supported (e.g., if someone files an issue, it will be seen and addressed in some way)

  • They are projects that have some involvement (typically strong involvement) from outside communities beyond the sponsoring organization (e.g., the university community or other agencies). Projects without outside collaboration should be stored elsewhere.

It is acceptable to store a fork in ESCOMP if this is the main development fork for inclusion in a community-oriented modeling project.

  • For example, the primary repository for the MOM6 ocean model is stored in a GFDL github organization. However, NCAR maintains a fork of MOM6 for use in CESM. It is acceptable to store NCAR's fork of MOM6 in ESCOMP so that it can appear alongside other CESM components, and because NCAR's MOM6 fork is still of broad interest to the community even though it isn't the primary fork.

  • On the other hand, it is not acceptable to store a fork in ESCOMP if this fork is only used internally at NCAR and is not part of a broader community modeling project.

Repository naming conventions

For repositories with an acronym as part or all of their name, we generally prefer the use of uppercase acronyms. For example, we have repositories named CESM and CISM-wrapper. An exception is repositories that are used solely for hosting GitHub pages (i.e., websites appearing under escomp.github.io/): These can use lowercase acronyms in order to have fully lowercase URLs. For example, we have a repository named ctsm-docs. Repositories that are used for both code and GitHub pages follow the uppercase convention (e.g., CESM); we can put redirects in place so that links to the lowercase version redirect to the uppercase version (e.g., escomp.github.io/cesm redirects to escomp.github.io/CESM).

We have no standardization concerning the use of hyphens, underscores or PascalCase to delineate separate words in a repository's name.

cism's People

Contributors

agsalin, billsacks, danfmartin, doug-ranken, eibarker, ekluzek, ggslc, gunterl, hgoelzer, icrutt, ikalash, jedwards4b, jhkennedy, katetc, kevans32, matthewhoffman, mhagdorn, mrnorman, sarahshannon, stephenprice, whlipscomb


cism's Issues

Glad's is_in_active_grid is slightly inconsistent with logic elsewhere in CISM

This is a spin-off from #39. I noticed that the conditional that determines the icemask in is_in_active_grid checks usrf > 0. But apparently some code elsewhere in CISM treats points with topg exactly equal to 0 as land. So, for consistency, is_in_active_grid should also consider points with topg exactly equal to 0 to be land, and thus within the ice mask.

As discussed in #39, this inconsistency is a problem for conservation, so it should definitely be fixed. However, it only impacts a small number of grid cells. As also discussed in #39, changing the conditional to usrf >= 0 doesn't work, because ocean grid cells have usrf == 0, so that change caused all points in the CISM domain to be considered within the ice mask. @whlipscomb suggested a possible change like usrf > 0 .or. topg >= 0, but we want to check that this really is the correct conditional, consistent with what is used elsewhere. Ideally, we would use an existing mask variable that is already consistent with the logic used elsewhere.
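To make the suggestion concrete, here is a minimal sketch of the proposed conditional. All names and the surrounding loop are illustrative (this is not the actual is_in_active_grid code), and the correct test still needs to be confirmed against the logic used elsewhere:

! Illustrative only: treat a point as land -- and thus inside the ice mask --
! if the upper surface is above sea level or the bed topography is at or
! above sea level.
do j = 1, nsn
   do i = 1, ewn
      if (usrf(i,j) > 0.d0 .or. topg(i,j) >= 0.d0) then
         ice_mask(i,j) = 1
      else
         ice_mask(i,j) = 0
      end if
   end do
end do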

libglissade IO issues

I've been trying to clean up CESM I/O issues and found that libglissade does not use the normal CESM logging mechanism; instead it does print * for log messages.
print * should be replaced with write(stdout, *) throughout this library.
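As a concrete illustration of the requested change (the message text is made up, and this assumes stdout is the log unit already provided by CISM's logging utilities):

! Before: bypasses the CESM logging mechanism
print *, 'glissade: starting velocity solve'

! After: writes through the normal log unit
write(stdout, *) 'glissade: starting velocity solve'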

with evolve_ice turned off, beta_internal differs on restart

From @billsacks on August 9, 2016 18:21

This is a low priority issue, but @whlipscomb suggests that we track it so that we can look into it at some point.

In my testing of the most recent code within CESM, I discovered that, when I turn ice evolution off (evolve_ice = .false.), restart runs fail due to differences in the beta_internal field when running with glissade. (They also fail due to differences in uflx and vflx with glide.)

Specifically, I observed this with test ERS_D_Ly3.f09_g16_gl4.TGIS2.yellowstone_pgi.cism-noevolve from r79699 on branch https://svn-ccsm-models.cgd.ucar.edu/glc/branches/develop_update_2016_07, which points to the code base in PR #58 . I also observed this in the n06 branch tag on that branch, which did calving_init even when ice evolution was off. (i.e., this problem occurs whether or not you do calving in initialization.)

Copied from original issue: E3SM-Project/cism-piscees#59

Come up with a way to shorten long lines that reference __FILE__

From @billsacks on September 23, 2017 12:18

The pgi compiler enforces a line length of (I think) 264 characters after doing macro expansion. This can cause problems for lines using the __FILE__ macro, because this expands to the absolute path to that file. This was causing problems in CESM for some automated tests that had long paths to the bld directory. I have fixed that problem with a cime change, but this too-long-line problem is likely to come back to bite us at some point.

Some possible solutions:

  1. Do what CLM does: In each file, have:

      character(len=*), parameter, private :: sourcefile = &
           __FILE__

    then use sourcefile rather than __FILE__ in lines of code (see the usage sketch after this list). This will still fail if the absolute path to the file is longer than about 256 characters, but it at least buys us some characters (because you're not adding the file path length to the source file line in which it's referenced).

  2. If the problem just occurs for source files that appear in the bld directory (because paths to the bld directory may be longer than paths to the source tree), then we could fix this just for files that are copied or generated in the bld directory. For example, for auto-generated io files, we could change references to __FILE__ to instead just give the file name without a path.

  3. We could consider a solution that uses just the file name itself rather than the full path, for all files. Some possibilities:

    a. Hard-code at the top of each file something like:

      character(len=*), parameter, private :: sourcefile = "myfilename.F90"

    b. Apparently PIO created its own _FILE_ macro in the past, which gave just the file name rather than its full path, though I don't understand exactly how it was done. (At a glance, it looks like PIO now uses solution (a).)

    c. There may be a way to do this via cmake. For example, see the cmake-based solution here: https://stackoverflow.com/questions/8487986/file-macro-shows-full-path. However, comments around that make it sound like a fragile solution. I've seen some other cmake-based solutions via googling, but they all seem either complex or fragile.
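For reference, here is a sketch of how option 1 looks in use; the module, subroutine, and message are illustrative only:

module example_mod
  implicit none
  character(len=*), parameter, private :: sourcefile = &
       __FILE__
contains
  subroutine check_value(x)
    real, intent(in) :: x
    if (x < 0.0) then
       ! 'sourcefile' replaces __FILE__ here, so this line stays short even
       ! after macro expansion; only the declaration above carries the path.
       write(*,*) 'ERROR in ', sourcefile, ': negative value'
    end if
  end subroutine check_value
end module example_mod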

Copied from original issue: E3SM-Project/cism-piscees#64

Linux (Ubuntu) build problems.

From @jhkennedy on April 23, 2015 22:47

CISM (dev and public) builds (with errors) on my Ubuntu systems. However, some of the test cases in the CISM dev branch do not run (detailed below).

Tested on both Ubuntu 14.04LTS and 12.04LTS -- with fresh installs

Build output:

[  1%] Building Fortran object CMakeFiles/glimmercismfortran.dir/libglimmer/parallel_mpi.F90.o
...
[ 40%] Building Fortran object CMakeFiles/glimmercismfortran.dir/libglint/glint_mbal.F90.o

/home/fjk/Documents/Code/cism-dev/libglint/glint_mbal.F90:37:0: warning: extra tokens at end of #ifdef directive [enabled by default]
 #ifdef USE_ENMABAL  ! This option is *not* suppported
 ^

[ 41%] Building Fortran object CMakeFiles/glimmercismfortran.dir/libglint/glint_mbal_coupling.F90.o
...
[ 78%] Building Fortran object CMakeFiles/glimmercismfortran.dir/libglimmer-solve/SLAP/dgmres.f.o

/home/fjk/Documents/Code/cism-dev/libglimmer-solve/SLAP/dgmres.f:2620.59:

     $           DZ, SX, JSCAL, JPRE, MSOLVE, NMSL, RWORK, IWORK,       
                                                           1
Warning: Type mismatch in argument 'ipar' at (1); passed REAL(8) to INTEGER(4)
/home/fjk/Documents/Code/cism-dev/libglimmer-solve/SLAP/dgmres.f:1881.36:

      IF (ISDGMR(N, B, X, XL, NELT, IA, JA, A, ISYM, MSOLVE,            
                                    1
Warning: Type mismatch in argument 'ia' at (1); passed INTEGER(4) to REAL(8)
/home/fjk/Documents/Code/cism-dev/libglimmer-solve/SLAP/dgmres.f:1976.38:

        IF (ISDGMR(N, B, X, XL, NELT, IA, JA, A, ISYM, MSOLVE,          
                                      1
Warning: Type mismatch in argument 'ia' at (1); passed INTEGER(4) to REAL(8)

[ 79%] Building Fortran object CMakeFiles/glimmercismfortran.dir/libglimmer-solve/SLAP/dcg.f.o
...
[ 86%] Building Fortran object CMakeFiles/glimmercismfortran.dir/libglimmer-solve/SLAP/xersla.f.o

/home/fjk/Documents/Code/cism-dev/libglimmer-solve/SLAP/xersla.f:245.21:

         call xerabt('xerror -- invalid input',23)                      
                     1
Warning: Type mismatch in argument 'messg' at (1); passed CHARACTER(1) to INTEGER(4)
/home/fjk/Documents/Code/cism-dev/libglimmer-solve/SLAP/xersla.f:325.18:

      call xerabt(messg,lmessg)                                         
                  1
Warning: Type mismatch in argument 'messg' at (1); passed CHARACTER(1) to INTEGER(4)

[ 86%] Building C object CMakeFiles/glimmercismfortran.dir/libglimmer/writestats.c.o
[ 87%] Building Fortran object CMakeFiles/glimmercismfortran.dir/fortran_autogen_srcs/glimmer_vers.F90.o
Linking Fortran static library lib/libglimmercismfortran.a
[ 92%] Built target glimmercismfortran
[ 93%] Building CXX object libglimmer-trilinos/CMakeFiles/glimmercismcpp.dir/trilinosNoxSolver.cpp.o
[ 94%] Building CXX object libglimmer-trilinos/CMakeFiles/glimmercismcpp.dir/trilinosGlissadeSolver.cpp.o
[ 95%] Building CXX object libglimmer-trilinos/CMakeFiles/glimmercismcpp.dir/trilinosModelEvaluator.cpp.o
Linking CXX static library ../lib/libglimmercismcpp.a
[ 96%] Built target glimmercismcpp
Scanning dependencies of target cism_driver
[ 97%] Building Fortran object cism_driver/CMakeFiles/cism_driver.dir/cism_external_dycore_interface.F90.o
Warning: Nonexistent include directory "/home/fjk/Documents/Code/cism-dev/builds/linux-gnu/build/include"
[ 98%] Building Fortran object cism_driver/CMakeFiles/cism_driver.dir/cism_front_end.F90.o

Warning: Nonexistent include directory "/home/fjk/Documents/Code/cism-dev/builds/linux-gnu/build/include"

[ 99%] Building Fortran object cism_driver/CMakeFiles/cism_driver.dir/gcm_to_cism_glint.F90.o

Warning: Nonexistent include directory "/home/fjk/Documents/Code/cism-dev/builds/linux-gnu/build/include"

[100%] Building Fortran object cism_driver/CMakeFiles/cism_driver.dir/gcm_cism_interface.F90.o

Warning: Nonexistent include directory "/home/fjk/Documents/Code/cism-dev/builds/linux-gnu/build/include"

[100%] Building Fortran object cism_driver/CMakeFiles/cism_driver.dir/cism_driver.F90.o

Warning: Nonexistent include directory "/home/fjk/Documents/Code/cism-dev/builds/linux-gnu/build/include"

Linking CXX executable cism_driver
[100%] Built target cism_driver
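As an aside, the "extra tokens at end of #ifdef directive" warning near the top of the log comes from the trailing Fortran comment on the directive line, which the C preprocessor does not understand. A minimal sketch of a fix is to move the comment onto its own line:

#ifdef USE_ENMABAL
! This option is *not* supported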

For a parallel build these tests work (run as serial and parallel [where applicable]):

  • Halfar
  • glint-example
  • higher-order (all except slab)

And these don't (errors detailed below):

  • EISMINT-1 (all)
  • EISMINT-2 (all)
  • higher-order/slab

Typical EISMINT-1 and EISMINT-2 output:

 $ ./cism_driver e1-fm.1.config 
 CISM dycore type (0=Glide, 1=Glam, 2=Glissade, 3=AlbanyFelix, 4 = BISICLES) =            0
 g2c%which_gcm (1 = data, 2 = minimal) =            0
 call cism_init_dycore
 Setting halo values: nhalo =           0
 WARNING: parallel dycores tested only with nhalo = 2
 Layout(EW,NS) =           31          31  total procs =            1
Global idiag, jdiag:          1     1
Local idiag, jdiag, task:     1     1     0
*** Error in `cism_driver': free(): invalid pointer: 0x00000000014cc8b0 ***

Program received signal SIGABRT: Process abort signal.

Backtrace for this error:
#0  0x7F4BE47A27D7
#1  0x7F4BE47A2DDE
#2  0x7F4BE3DF0D3F
#3  0x7F4BE3DF0CC9
#4  0x7F4BE3DF40D7
#5  0x7F4BE3E2D393
#6  0x7F4BE3E3966D
#7  0x5D3A91 in __glimmer_sparse_slap_MOD_slap_solve at glimmer_sparse_slap.F90:210 (discriminator 1)
#8  0x545407 in __glimmer_sparse_MOD_sparse_solve at glimmer_sparse.F90:237
#9  0x5457A8 in __glimmer_sparse_MOD_sparse_easy_solve at glimmer_sparse.F90:373 (discriminator 1)
#10  0x487A6D in thck_evolve at glide_thck.F90:561
#11  0x48AB5F in __glide_thck_MOD_thck_lin_evolve at glide_thck.F90:170
#12  0x473ACA in __glide_MOD_glide_tstep_p2 at glide.F90:862
#13  0x439344 in __cism_front_end_MOD_cism_run_dycore at cism_front_end.F90:302
#14  0x439986 in __gcm_cism_interface_MOD_gci_run_model at gcm_cism_interface.F90:118
#15  0x438D03 in cism_driver at cism_driver.F90:49

higher-order/slab output (serial):

$ /slab.py 
Using Scientific.IO.NetCDF for netCDF file I/O
Writing slab.nc
Running CISM for the confined-shelf experiment
==============================================

Executing serial run with:  ./cism_driver slab.config


 CISM dycore type (0=Glide, 1=Glam, 2=Glissade, 3=AlbanyFelix, 4 = BISICLES) =            2
 g2c%which_gcm (1 = data, 2 = minimal) =            0
 call cism_init_dycore
  * FATAL ERROR : ice limit (thklim) is too small for Glissade dycore
 Fatal error encountered, exiting...
 PARALLEL STOP in /home/fjk/Documents/Code/cism-dev/libglimmer/glimmer_log.F90 at line          178
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD 
with errorcode 1001.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------

higher-order/slab output (parallel):

$ /slab.py -m 2
Using Scientific.IO.NetCDF for netCDF file I/O
Writing slab.nc
Running CISM for the confined-shelf experiment
==============================================

Executing parallel run with:  mpirun -np 2 ./cism_driver slab.config


 CISM dycore type (0=Glide, 1=Glam, 2=Glissade, 3=AlbanyFelix, 4 = BISICLES) =            2
 g2c%which_gcm (1 = data, 2 = minimal) =            0
 call cism_init_dycore
  * FATAL ERROR : ice limit (thklim) is too small for Glissade dycore
 Fatal error encountered, exiting...
 PARALLEL STOP in /home/fjk/Documents/Code/cism-dev/libglimmer/glimmer_log.F90 at line          178
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD 
with errorcode 1001.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 1903 on
node pc0101123 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[pc0101123:01901] 1 more process has sent help message help-mpi-api.txt / mpi-abort
[pc0101123:01901] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

Copied from original issue: E3SM-Project/cism-piscees#28

time dependent forcing applied at incorrect time in timestep

From @stephenprice on March 10, 2015 18:17

In cism_driver/cism_front_end.F90, we have the option to read time-dependent forcing fields from an input .nc file. Currently, we use this to read in and apply temperature, smb, and Dirichlet BC info (a mask for where Dirichlet BCs are to be applied, as well as the u and v velocity fields to apply there). This capability is demonstrated in one supported test case (see ./tests/higher-order/dome/dome.forcing.*).

We found, however, that for implementing Dirichlet boundaries the u and v fields need to be applied at the start of the time step, whereas the smb and temperature should be applied at the end of the time step. The current implementation only allows for one choice (all fields are read in and applied at the same time).

Right now, the problems that arise from this inconsistency are worse if we leave the forcing at the end of the time step as opposed to moving it to the start of the time step (we can't make progress on simulations with time dependent vel BCs applied, a priority for validation), so we will move it to the start. Thus, the smb and sfc air temps are being applied as bcs / source terms at the wrong part of the time step. This can / should eventually be fixed by allowing for different forcing fields to be applied at different parts of the time step.
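A rough sketch of the eventual behavior described above, using purely hypothetical routine names (the current code applies all forcing fields at a single point in the step):

! Hypothetical pseudocode, not the current cism_front_end logic:
call read_forcing_fields(model, time)          ! read all time-dependent forcing once
call apply_dirichlet_velocity_bcs(model)       ! u/v velocity BCs: start of the step
call run_dycore_timestep(model)                ! dynamics and thickness evolution
call apply_smb_and_temperature_forcing(model)  ! smb and sfc air temps: end of the step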

Copied from original issue: E3SM-Project/cism-piscees#19

Encapsulate module-level data in derived types, for parallel module and maybe others

To support multiple ice sheet instances (e.g., Greenland & Antarctica in a single simulation), we can't have module-level data; instead, data need to be encapsulated in instances of a derived type. This is mostly already the case. The main – and possibly only – exception is the parallel module (own_ewn, own_nsn, and many others).
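A minimal sketch of what this encapsulation might look like for the parallel module (the type and field names are illustrative; own_ewn and own_nsn are the kinds of module-level variables mentioned above):

module parallel_types
  implicit none

  ! Per-instance decomposition data, replacing module-level variables so that
  ! each ice sheet instance (e.g., Greenland and Antarctica) carries its own copy.
  type :: parallel_type
     integer :: own_ewn = 0   ! locally owned cells in the EW direction
     integer :: own_nsn = 0   ! locally owned cells in the NS direction
     integer :: comm = -1     ! MPI communicator for this instance
  end type parallel_type

end module parallel_types

Routines in the parallel module would then take an argument of this type rather than referencing module-level variables.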

In April, 2015, @whlipscomb said:

The only modules I can think of (besides parallel) that contain free-floating variables (i.e., not part of a derived type) are glimmer_paramets and glimmer_physcon. Assuming that rhoi and similar variables have the same values everywhere, I don’t think this will be a problem. But if we wanted different values of rhoi (say) for different ice sheets, we could make it part of the glide_paramets derived type.

@whlipscomb has agreed to take this on.

SLAP issue with cce (cray compiler)

A runtime error is generated in libglimmer-solve/SLAP/xersla.f with the cce/11.0.2 compiler.

A fix is to change the declaration of messg in xersla.f from dimension messg(nmessg) to character*(*) messg.
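For illustration, the declaration change in fixed-form Fortran would look something like this (a sketch, not the exact xersla.f context):

C     Before: messg is implicitly typed and declared as a numeric array
      dimension messg(nmessg)
C     After: messg is declared as an assumed-length character string
      character*(*) messg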

Some python files have python2-style print statements

I found this by running the Python reformatter "black" on all of CTSM, targeting python-3.7. It pointed out that the following files have Python 2-style print statements and so won't work with Python 3. Fixing them is simple: just change lines like

print "thing"

to

print("thing")

so the fix is straight-forward.

components/cism/source_cism/tests/dome/netCDF.py:
components/cism/source_cism/tests/halfar/halfar_results.py:
components/cism/source_cism/tests/halfar/netCDF.py:
components/cism/source_cism/tests/ismip-hom/netCDF.py:
components/cism/source_cism/tests/ismip-hom/plotISMIP_HOM.py:
components/cism/source_cism/tests/MISMIP3d/mismip3d.code/mismip3dRun.py:
components/cism/source_cism/tests/MISMIP3d/mismip3d.code/mismip3dSetup.py:
components/cism/source_cism/tests/MISMIP3d/mismip3d.code/mismip3dWriteGL.py:
components/cism/source_cism/tests/MISMIP/mismip.code/mismipPlotGL.py:
components/cism/source_cism/tests/MISMIP/mismip.code/mismipRun.py:
components/cism/source_cism/tests/MISMIP/mismip.code/mismipSetup.py:
components/cism/source_cism/tests/MISMIP/mismip.code/mismipWriteGL.py:
components/cism/source_cism/tests/MISOMIP/mismip+/mismip+Run.py:
components/cism/source_cism/tests/MISOMIP/mismip+/mismip+Setup.py:
components/cism/source_cism/tests/MISOMIP/mismip+/mismip+WriteGL.py:
components/cism/source_cism/tests/netCDF.py:
components/cism/source_cism/tests/new/netCDF.py:
components/cism/source_cism/tests/ross/netCDF.py:
components/cism/source_cism/tests/ross/plotRoss.py:
components/cism/source_cism/tests/shelf/netCDF.py:
components/cism/source_cism/tests/slab/netCDF.py:
components/cism/source_cism/tests/slab/plotSlab.py:
components/cism/source_cism/tests/stream/netCDF.py:
components/cism/source_cism/tests/unsupported/exact-isothermal/scripts/create_test.py:
components/cism/source_cism/tests/unsupported/exact-isothermal/scripts/plot_verif.py:
components/cism/source_cism/tests/unsupported/exact-isothermal/scripts/run_verif.py:
components/cism/source_cism/tests/viewNetCDF.py:
components/cism/source_cism/utils/f90_dependency_tool/f90_dependencies.py:

Are changes needed to support a Gregorian (leap year) calendar?

From @billsacks on April 8, 2016 1:25

For now this is more of a question than an issue... we can change it into an issue if it seems warranted:

Do people feel that any changes are needed for CISM to support a Gregorian (leap year) calendar? Typically CESM operates with a NOLEAP calendar, but some applications use it with a Gregorian calendar.

I'm pretty sure that CISM currently assumes a 365-day (no-leap) calendar (e.g., the scyr parameter is hard-coded to 365 days). I could imagine this possibly causing small errors both in terms of averaging quantities sent to / from the climate model, and also in terms of the number and size of the dynamics timesteps that fit within a year.

There are really two questions:

(1) Are people concerned about the small (but systematic) errors this will cause?

(2) Might the current assumptions cause any major problems, such as causing the run to crash?

This one will probably take some experimentation. I'm not sure how to test this, which is part of my motivation for filing this as an issue to come back to later.

Copied from original issue: E3SM-Project/cism-piscees#54

Remove SLAP at some point

From @billsacks on August 9, 2016 22:7

SLAP is old code that causes problems, particularly with the NAG compiler. We'd like to remove it at some point - once we have removed glide from CISM.

This won't happen in the immediate future, but I'm opening this issue to keep track of specific things that should be done once we can remove SLAP.

Copied from original issue: E3SM-Project/cism-piscees#60

Runtime error with nag6.2 compiler on hobart

This problem shows up on hobart with the nag6.2 compiler. We didn't see it with the nag6.1 compiler.

This seems to mostly be for tests with DEBUG enabled.

Here's the runtime error message for /SMS_P48x1_D_Ld5.f10_f10_musgs.I2000Clm50Cn.hobart_nag.clm-default

Runtime Error: /fs/cgd/data0/erik/ctsm_nfix/components/cism/source_cism/libglimmer/parallel_mpi.F90, line 4487: Assignment to XOUT affects dummy argument XIN
Program terminated by fatal error
/fs/cgd/data0/erik/ctsm_nfix/components/cism/source_cism/libglimmer/parallel_mpi.F90, line 4487: Error occurred in PARALLEL:PARALLEL_REDUCE_MAXLOC_INTEGER
/fs/cgd/data0/erik/ctsm_nfix/components/cism/source_cism/libglimmer/parallel_mpi.F90, line 3225: Called by PARALLEL:PARALLEL_LOCALINDEX
/fs/cgd/data0/erik/ctsm_nfix/components/cism/source_cism/libglide/glide_diagnostics.F90, line 120: Called by GLIDE_DIAGNOSTICS:GLIDE_INIT_DIAG
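What NAG appears to be flagging is argument aliasing: the reduction routine is called with actual arguments that overlap, so assigning to the xout dummy also modifies xin, which the Fortran standard forbids. A purely illustrative sketch, with a hypothetical two-argument interface (the real routine's interface is not shown in the error message):

integer :: local_val, global_val

local_val = 42
call parallel_reduce_maxloc_integer(local_val, local_val)   ! aliased actual arguments: non-conforming
call parallel_reduce_maxloc_integer(local_val, global_val)  ! distinct variables: conforming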

If no problem type is specified in .config file, "EISMINT" error occurs

From @matthewhoffman on July 28, 2015 15:23

If no problem type is specified in the .config file, CISM aborts with:
FATAL ERROR : No EISMINT forcing selected

There is no documentation for how a user should set up a user-defined simulation that is not one of the canned test cases. The modules that parse the .config file look for different domain types as a config section in square brackets. For instance, an EISMINT-2 test has an [EISMINT-2] section (with some items beneath it), and a dome test has a [DOME-TEST] section (with no items beneath it). Currently the code expects one of these problem types to be specified, and if one isn't, it throws the error "FATAL ERROR : No EISMINT forcing selected".

A workaround for running a user-defined problem is to add the line:
[DOME-TEST]
to the .config file. The dome test is a generic problem type and can be used for any user-defined simulations. This line simply tells CISM that nothing special needs to occur in setting up the problem. Ideally we should generalize this so that for a generic problem either

  1. no problem type needs to be defined at all; or
  2. we have something like a [GENERIC] problem type that makes it explicit that nothing unusual needs to occur inside the model set up.

Copied from original issue: E3SM-Project/cism-piscees#38

For Antarctica simulations in CESM, SMB can accumulate in the ocean

I noticed that, in some of my Antarctica tests in CESM, SMB accumulates in areas that are in the ocean. I have noticed this in an I compset test that uses very coarse land resolution (10 deg x 15 deg: /glade/scratch/sacks/ERS_Vnuopc_Ly3.f10_f10_ais8_mg37.I1850Clm50SpGa.cheyenne_intel.20210811_223534_2a9bfb). It's possible that this problem won't arise in more realistic production resolutions (it doesn't appear in a T compset where the land forcing is at f09_g17 resolution; I haven't tried an I compset at that 1-degree resolution), but I still think that this is a real issue that should be fixed.

Here is a figure showing SMB from CISM's history file from the above test:


This positive SMB in the periphery of the grid (which I think should be open ocean) leads to an accumulation of ice thickness in these grid cells.

Here is the corresponding SMB map from the coupler history file:


From digging in to some of the coupling fields and from comparing the behavior with that in Greenland simulations, I think what's going on is:

  • The icemask correctly starts as 0 in open ocean points in Antarctica.
  • There is always potentially a non-zero SMB passed in areas outside of the ice sheet: CTSM generates SMB outside as well as inside the icemask, and assumes that CISM will discard any SMB sent outside the icemask.
  • For Greenland runs, CISM appears to properly be discarding SMB sent outside the icemask. However, for Antarctica runs, there are some grid cells outside the icemask that accept this SMB and start growing ice.
  • It appears that these problematic points start with usurf = 0. So it seems like the problem is that some usurf = 0 points are able to accept SMB in Antarctica, in contrast to Greenland. Interestingly, there is a swath of points in the Antarctica run that seems to zero out the SMB, but beyond that swath, it accepts SMB.

This is a problem not only because it leads to ice growth in the open ocean, but also because I think it would break conservation in a fully-coupled run: CTSM assumes that the icemask (actually icemask_coupled_fluxes, but in practice they are effectively the same) dictates where CISM is able to receive SMB. Conversely, it assumes that, if icemask_coupled_fluxes is 0, then CISM is not able to receive SMB there, and so CTSM should send any generated SMB directly to the ocean, via the snow capping flux. But, for system-wide conservation, that means that CISM needs to be consistent in this: If CISM says that icemask_coupled_fluxes is 0 (which is typically the case over open ocean), then it needs to discard any SMB that is sent there. I'll admit that this coupling is rather subtle and error-prone, and could probably stand to be reworked (probably with the coupler/mediator handling some of this logic itself), but for now that's how things work.
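A sketch of the consistency requirement this implies on the CISM side; the field names are illustrative, not the actual coupling code:

! Illustrative only: wherever CISM reports icemask_coupled_fluxes == 0, any
! SMB received from the coupler must be discarded rather than turned into ice.
where (icemask_coupled_fluxes > 0.d0)
   acab_applied = acab_received   ! accept SMB inside the coupled mask
elsewhere
   acab_applied = 0.d0            ! discard SMB outside the coupled mask
end where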

cesm build of cism is very slow

CISM is one of the slowest builds in a CESM BMOM case.
I clocked 409 s on cheyenne in case PFS.ne30pg3z58_t061.B1850MOM.cheyenne_intel

Minor issues with SLAP

I'm not sure if this is worth fixing (see #14), but @grnydawn points out:

I recently evaluated a fortran parser for my own work, and found several things that we may improve code quality of CESM in terms of Fortran standard conformance.

In "cism/source_cism/libglimmer-solve/SLAP/xersla.f" and other several files, there is a line similar to "format (15h error number =,i10)". By standard, a string literal should be wrapped by quotation mark, either single or double. But again, compiler generally accepts it. Right form is "format ("15h error number =",i10)".

Within SLAP, from a quick search, I see lines like this in xersla.f but not in other files: From git grep -n 'format *(':

xersla.f:289:   21          format ('(11x,21hin above message, i',i1,'=,i',i2,')   ')
xersla.f:295:   23          format ('(11x,21hin above message, r',i1,'=,e',
xersla.f:303:   30          format (15h error number =,i10)
xersla.f:383:   10       format (32h0          error message summary/
xersla.f:394:   40       format (41h0other errors not individually tabulated=,i10)

@grnydawn, did you see other instances of this?
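For reference, converting one of these lines means replacing the Hollerith edit descriptor (the nH prefix plus the following n characters) with a quoted string. A sketch using the third line above:

C     Legacy Hollerith edit descriptor (deleted from the Fortran standard):
   30 format (15h error number =,i10)
C     Equivalent form using a character string edit descriptor:
   30 format (' error number =',i10)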

additional testing of centered vs. upwinded surface gradient calculations in Glissade dycore

From @stephenprice on March 23, 2015 21:12

When using incremental remapping (or FO upwinding) for thickness evolution of idealized test cases (e.g. Halfar), checkerboard surface elevation patterns have been observed to develop when using a centered sfc elevation gradient calculation scheme. An "upwinding" sfc elev. gradient scheme was introduced and shown to alleviate the problem.

However, for realistic test cases (e.g. 4 km Greenland), exactly the opposite behavior has been observed; the centered gradient scheme results in smooth sfc elev (and velocity) fields and the upwinded gradient scheme introduces what appears to be a checkerboard mode after only a few years of fwd integration (see example figs below).

This behavior has been observed in multiple branches of the code (including devel), when using different dynamical cores, and when using either IR or FO upwinding for advection. A suggestion for further testing from W. Lipscomb is as follows:

"One test worth trying would be to set accuracy_flag_in = 1 in glissade_velo_higher.F90 in the call to the gradient routines. This would generate a 1st-order rather than 2nd -order accurate one-sided gradient, which might be less prone to checkerboard noise (or at any rate, the differences between 1st order and 2nd order might tell us something)."


Image WITH checkerboarding in sfc speed field after ~20 years of integration (when using the upwinded elevation gradient scheme).

Image with NO checkerboarding in sfc speed field after ~20 years of integration (when using the centered elevation gradient scheme).

Copied from original issue: E3SM-Project/cism-piscees#25

Add time_bounds to history files

From @sherimickelson

Do you know if there were plans to add a time bounds variable in the cism output files? It seems as if cism is the only component that doesn't do this and this variable is being requested by cmip6. I could automatically figure this out, but I can see it being error prone because there will have to be a lot of assumptions coming from my code that will likely break if any conditions change. It seems like it would be more reliable coming from the model itself. Would it be possible to add this in for the 2.1 release?

writestats.c fails with intel/2023.0.0 icx compiler

The following error was encountered when compiling writestats.c; adding the header file <ctype.h>, as suggested in the compiler note, solves the problem.

/glade/work/jedwards/sandboxes/cesm2_x_alpha/components/cism/source_cism/libglimmer/writestats.c:70:10: error: call to undeclared library function 'isalnum' with type 'int (int)'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
  while (isalnum(resfname[i]) && i < CFG_LEN) i++;
         ^
/glade/work/jedwards/sandboxes/cesm2_x_alpha/components/cism/source_cism/libglimmer/writestats.c:70:10: note: include the header <ctype.h> or explicitly provide a declaration for 'isalnum'
1 error generated.

Rework CISM's time management

We'd like to rework CISM's time management, including:

  • Improve specification of dt to allow more possible values

  • Allow use of Gregorian calendar

  • Switch to integer-based internal time representation to avoid accumulating roundoff errors
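As a sketch of the last point (purely illustrative, not CISM's current or planned code), internal time can be tracked as an integer count of a fixed base unit, with the real-valued model time derived from it rather than accumulated by repeatedly adding a real dt:

integer(8) :: nsteps = 0                       ! elapsed steps since the start of the run
integer(8), parameter :: dt_seconds = 3600_8   ! example timestep length
real(8) :: time_yr

! each timestep:
nsteps = nsteps + 1
! time is derived from the integer counter rather than accumulated in floating
! point, so roundoff does not build up over a long run (no-leap year assumed)
time_yr = real(nsteps * dt_seconds, 8) / (365.0d0 * 86400.0d0)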

empty 'results' file created during each run

From @matthewhoffman on August 20, 2015 16:19

It looks like the 'results' file gets created in libglimmer/writestats.c, line 95. I'm not sure what this code is doing but I'm pretty sure we aren't using it.

Copied from original issue: E3SM-Project/cism-piscees#41

bug in libglimmer/writestats.c

This issue appears when using the intel icx compiler. The file writestats.c is missing the header ctype.h.

Can a branch of release-cism2.1.03 be created with this fix, so that I don't have to keep repeating to users that they need to make this change to run on derecho?

DIVA is inaccurate when flwa varies strongly vertically

From @matthewhoffman on October 4, 2016 20:33

The DIVA and BP velocity solvers calculate quite similar results for standard test cases (e.g. ISMIP-HOM), but recent testing has revealed that they generate quite different results when flwa varies significantly vertically.

Here is an example of two runs with identical setup except for which solver is used:

I isolated the issue - it only occurs when flwa varies vertically as in the above example. If flwa is made vertically constant, then the two solvers give similar answers:

It makes sense that the results of DIVA would be sensitive to the details of how flwa (or the effective viscosity) is integrated vertically. This may take some careful thought and testing to resolve in a way that allows DIVA to yield results comparable to BP. For example, flwa can vary vertically by two or more orders of magnitude, meaning a straight arithmetic average may be inappropriate. Similarly, consideration may be needed for the vertical arrangement of flwa values - presumably soft ice at the bed should affect the depth-integrated effective viscosity much more than soft ice near the surface.
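As a purely illustrative aside on why the averaging choice matters for a quantity spanning orders of magnitude (this is not a proposed fix), compare an arithmetic and a log-space (geometric) vertical mean of a column of flwa values:

real(8) :: flwa_col(10)            ! flwa at the vertical levels of one column
real(8) :: mean_arith, mean_geom

! Arithmetic mean: dominated by the few largest (softest) values
mean_arith = sum(flwa_col) / size(flwa_col)

! Geometric (log-space) mean: weights each level's order of magnitude equally
mean_geom = exp(sum(log(flwa_col)) / size(flwa_col))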

For the record, I was using this flwa profile (with uniform vertical levels):

for i in range(nx):
    for j in range(ny):
        flwastag[0, :, j, i] = (
            6.24821E-17,
            5.98918E-17,
            3.78746E-17,
            1.86614E-17,
            9.50214E-18,
            7.66194E-18,
            7.9094E-18,
            1.06029E-17,
            3.31514E-17,
            # 6.71515E-17,  # E=1
            2.01e-16,  # E=3
        )

which comes from the temperature profile in this paper:
Ryser, C., M. P. Lüthi, L. C. Andrews, M. J. Hoffman, G. A. Catania, R. L. Hawley, T. A. Neumann, and S. S. Kristensen (2014), Sustained high basal motion of the Greenland ice sheet revealed by borehole deformation, J. Glaciol., 60(222), 647–660, doi:10.3189/2014JoG13J196.
and using the updated Cuffey and Paterson flwa formula.

Copied from original issue: E3SM-Project/cism-piscees#61
