access-om2's Introduction




DEPRECATION NOTICE

ACCESS-NRI has taken on responsibility for ongoing support of ACCESS-OM2, so this repository is not being updated and will eventually be archived.

New ACCESS-OM2 experiments and code development should use the latest code release from ACCESS-NRI/ACCESS-OM2 and configurations from ACCESS-NRI/access-om2-configs, and all new issues should be posted on one of those repositories.


ACCESS-OM2

ACCESS-OM2 is a global coupled ocean–sea ice model being developed by COSIMA.

ACCESS-OM2 consists of the MOM 5.1 ocean model, CICE 5.1.2 sea ice model, and a file-based atmosphere called YATM coupled together using OASIS3-MCT v2.0. ACCESS-OM2 builds on the ACCESS-OM (Bi et al., 2013) and AusCOM (Roberts et al., 2007; Bi and Marsland, 2010) models originally developed at CSIRO.

ACCESS-OM2 comes with a number of standard configurations in the control directory. These include sea ice and ocean at a nominal 1.0, 0.25 and 0.1 degree horizontal grid spacing, forced by JRA55-do atmospheric reanalyses.

ACCESS-OM2 is being used for a growing number of research projects. A partial list of publications using the model is given here.

Downloading

This repository contains many submodules, so you will need to clone it with the --recursive flag:

git clone --recursive https://github.com/COSIMA/access-om2.git

To update a previous clone of this repository to the latest version, you will need to do

git pull

followed by

git submodule update --init --recursive

to update all the submodules.

Where to find information

The v1.0 model code, configurations and performance were described in Kiss et al. (2020), with further details in the draft ACCESS-OM2 technical report. The current code and configurations differ from v1.0 in a number of ways (biogeochemistry, updated forcing, improvements and bug fixes), as described by Solodoch et al. (2022), Hayashida et al. (2023), Menviel et al. (2023) and Wang et al. (2023).

Model output can be accessed by NCI users via the COSIMA Cookbook.

For information on downloading, building and running the model, see the ACCESS-OM2 wiki.

NOTE: All ACCESS-OM2 model components and configurations are undergoing continual improvement. We strongly recommend that you "watch" this repo (see button at top of screen; ask to be notified of all conversations) and also watch all the component models, whichever configuration(s) you are using, and payu to be kept informed of updates, problems and bug fixes as they arise.

Requests for help and other issues associated with the model, tools or configurations can be registered as ACCESS-OM2 issues.

Conditions of use

We request that users of this or other ACCESS-OM2 model code:

  1. consider citing Kiss et al. (2020) (http://doi.org/10.5194/gmd-13-401-2020), and also the other papers above detailing more recent improvements to the model

  2. include an acknowledgement such as the following:

    The authors thank the Consortium for Ocean-Sea Ice Modelling in Australia (COSIMA; http://www.cosima.org.au) for making the ACCESS-OM2 suite of models available at https://github.com/COSIMA/access-om2.

  3. let us know of any publications which use these models or data so we can add them to our list.

access-om2's People

Contributors

a-parkinson, aekiss, aidanheerdegen, andyhogganu, marshallward, navidcy, nichannah, nicholash, rmholmes


access-om2's Issues

Ocean heat increasing

025deg_jra55_ryf_spinup7
exe: matm_jra55_0609e5ad.exe
exe: fms_ACCESS-OM_030fb1f2.x
exe: cice_auscom_1440x1080_480p_fe730022.exe
This run has no trend in salt or eta_t, but total ocean heat is increasing (and potential energy decreasing) with a linear trend over 40 years and no sign of levelling off. The trends are a fairly large fraction of the annual cycle. Might a net input of heat over the JRA55-RYF9091 annual cycle cause this? Or would we expect this to level off? @AndyHoggANU do you see this at 1 deg? I'll look at cumulative heat fluxes tomorrow to try to narrow down the culprit.

see /g/data3/hh5/tmp/cosima/access-om2-025/025deg_jra55_ryf_spinup7/output*/ocean/ocean_scalar.nc

(Screenshots attached, dated 2017-09-10: 9:36 pm, 9:52 pm and 9:42 pm.)

All models should give a backtrace when exiting on an error

Often when the model crashes it's very difficult to find out why. There may be an error message but it is in an unknown file or badly formatted. It would be good to find a way to consistently manage error messages across the different models.

This is related to this one: #26

matm compile error

Hi folks,

Following your beautiful instructions I have come across the following compiler error:

[pas561@raijin3 bin]$ cd $ACCESS_OM_DIR/src/matm
cd /short/v45/pas561/access-om2/src/matm
[pas561@raijin3 matm]$ make
gmake
build/build.sh core

...

mpif90 -c -r8 -i4 -O2 -align all -g -traceback -w -fpe0 -ftz -convert big_endian -assume byterecl -assume buffered_io -check noarg_temp_created -I. -I/short/v45/pas561/access-om2/src/matm/source -I. -I/include -I/short/v45/pas561/access-om2/src/oasis3-mct//Linux/build/lib/psmile.MPI1 -I/short/v45/pas561/access-om2/src/oasis3-mct//Linux/build/lib/pio -I/short/v45/pas561/access-om2/src/oasis3-mct//Linux/build/lib/mct cpl_netcdf_setup.f90
cpl_netcdf_setup.f90(7): error #7002: Error in opening the compiled module file. Check INCLUDE paths. [NETCDF]
use netcdf
----^
cpl_netcdf_setup.f90(27): error #6404: This name does not have a type, and must have an explicit type. [NF90_NOERR]
if (status /= nf90_noerr) then ....

Andy Hogg doesn't get this error. We tried loading different types of netcdf modules.

Cheers
Paul

Model crash in year 5

I have been running the 1° ACCESS-OM2 happily all weekend, with timesteps up to 3600 s and ~23 minutes of walltime per model year. So, from day 1 of model year 71, I decided to extend my 2-year simulations up to 5 years, to minimise time spent in the queue. When I did that I had a curious error with MATM:

MATM istep1: 43463 idate: 751218 sec: 0

MATM: error - from NetCDF library
Opening /g/data1/ua8/JRA55-do/RYF/v1-3/RYF.snow.1990_1991.nc
NetCDF: HDF error


The InfiniBand retry count between two MPI processes has been
exceeded. "Retry count" is defined in the InfiniBand spec 1.2
(section 12.7.38):

The total number of times that the sender wishes the receiver to
retry timeout, packet sequence, etc. errors before posting a
completion error.

...

That is, it got within 13 days of 5 years and then couldn't read the input file. I checked, and got the same error on the same model day!

For now, I have dropped back to 2-year runs and am now beyond year 77. My main worry here is that this issue might be due to a bug in MATM, which usually isn't fatal, but may still be causing problems in other cases. Any ideas?

(If you want to see more error details, look at the jobs that failed on Nov 5 in /home/157/amh157/access-om2/control/1deg_jra55_ryf/archive/error_logs/).

Initial values for Ice_ocean_boundary arrays in ocean_solo.F90

Oops I meant to raise this issue here and not on the main mom repository....

I think setting these values to zero was a mistake that can lead to errors if there is a problem with the coupling. I've had a look at the ACCESS-CM2 version and it looks to me like they always skip the first pass of forcing fields from the ice to the ocean, i.e. the first coupling period has all-zero fluxes being passed to the ocean. This means that reproducibility will be impossible between, say, two six-month runs versus a one-year run.

I'd suggest changing them to something ridiculous to force a crash if they don't get values from the coupler. Let's say -1.d100

I also think the ice_ocn_bnd_from_data call should be after call to the external coupler but that's another issue.
@ofa001 @arnoldsu could you suggest this for ACCESS-CM2?
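
A minimal standalone sketch of this suggestion (the type and field names below are illustrative, not the actual Ice_ocean_boundary components in ocean_solo.F90):

! Sketch: initialise coupling fields to an absurd value so that any field the
! coupler fails to overwrite causes an obvious blow-up rather than a silent
! zero flux.
program poison_coupling_fields
  implicit none
  real(kind=8), parameter :: missing_flux = -1.d100
  type :: ice_ocean_boundary_sketch   ! stand-in for the real derived type
     real(kind=8), allocatable :: t_flux(:,:), salt_flux(:,:)
  end type
  type(ice_ocean_boundary_sketch) :: bnd

  allocate(bnd%t_flux(10,10), bnd%salt_flux(10,10))
  bnd%t_flux    = missing_flux
  bnd%salt_flux = missing_flux

  ! In the real driver the coupler should overwrite these before first use;
  ! if it does not, the downstream physics fails immediately and visibly.
  print *, 'coupling fields initialised to', missing_flux
end program poison_coupling_fields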

Trivial issues with output and error files

A minor thing which would be nice to fix. We are getting two warnings which are swamping the access.err files:

WARNING from PE 186: set_date_c: Year zero is invalid. Resetting year to 1

WARNING from PE 0: diag_manager_mod::register_diag_field: module/output_field ocean_model/mass_river_on_nrho NOT found in diag_table

They don't affect the model of course; they just make it harder to look through the output files. It would be nice to clean them up if we can work out why they are happening.

Processor masking

We need to do processor masking in the ocean to reduce the number of wasted CPUs. This will require generation of a mask table and changes to the MOM OASIS setup.

All models should print their git hash on startup

I think it would make sense for all models (MOM, CICE, MATM) to print out their git hash IDs on start-up. This would be a sanity check to avoid copying over/using bad/old executables. This mistake has caused me trouble.

Checking the hash ID would involve running the exe standalone without MPI; it will print the hash before crashing. Alternatively, the string table can be dumped with a tool like readelf, objdump or nm.

I know that @akiss has a tool that also helps with this problem.
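
A minimal sketch of how a hash could be baked in at build time and printed at start-up (the GIT_HASH macro and the compile flag shown are assumptions, not the existing build setup):

! Sketch only: compile with preprocessing enabled (.F90 suffix, or -fpp/-cpp)
! and pass the hash in from the build script, e.g.
!   mpif90 -fpp -DGIT_HASH=\"$(git rev-parse --short HEAD)\" version_check.F90
#ifndef GIT_HASH
#define GIT_HASH "unknown"
#endif
program version_check
  implicit none
  ! Printing this as the very first line of output also means the hash can be
  ! recovered later from the executable with strings/readelf if needed.
  write(*,'(a)') 'git hash: '//GIT_HASH
end program version_check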

stf in ocean_sbc.F90 bad when use_waterflux=.false.

From Siobhan:

"melt on line 3593. I noticed when I was comparing ocean_sbc.F90 that there was one difference in the T_prog(index_salt)%stf(ii,jj) loop line 3593 in your code v our code in the coupled model.

You have left the “melt” rather than “wfimelt +wfiform” for the ACCESS case."

This code is not used but this should still be fixed. Thanks @ofa001

Bering Strait needs widening in 1deg configuration

There is reference to a problem noted by @AndyHoggANU here:

#21

There was also a nice plot of high velocities in the Bering Strait but I can't find it.

To try to resolve this issue the following changes have been made:

  • Widen the Bering Strait by 1 grid cell on the Eastern side

And while I was there:

  • Widened the Red Sea by one grid cell on the Western side
  • Widened the Persian Gulf by one grid cell on the Eastern side

It was also necessary to remove the land-sea mask from the Oasis restart files.

Review ocean_sbc.F90 for ACCESS-OM/CM case

There have been some changes to ocean_sbc.F90 and there are still a few points of confusion wrt CICE coupling (at least in my mind). So this issue is a place to gather observations. e.g.

  1. wfimelt, wfiform include all precip falling onto snow. i.e. it is total fresh water flux coming from grid boxes with ice. When using zero_net_water_coupler=.true. this is important to know. Perhaps we should change the definition of these fields to not include precipitation over ice?

  2. there is a mistake at line 3593 where melt() is being used instead of wfimelt + wfiform. This code is not used but it should be fixed anyway. See issue mom-ocean/MOM5#192

  3. I would like to bring to the attention of @ofa001 that 3527 has changed. https://github.com/mom-ocean/MOM5/blob/f6f4e4ae7f0fb966b0e2ba8fc33ae0caa96950a2/src/mom5/ocean_core/ocean_sbc.F90#L3527 . (wfimelt+wfiform) was previously used here. I don't think this has an impact on ACCESS-CM because it is under zero_net_water_coupler=.true.

  4. given #18 I think there is a case for a water budget sanity check / test that encompasses both MOM and CICE.

Remove unused input files

There are quite a few files in the ACCESS-OM2 test case input dataset which I suspect are unused.

For example access-om2/input/mom_025deg/ includes things like ncar_precip.nc, near_rad.nc. Are these needed?

Bring model input creation into access-om2 repository

This issue is a suggestion that others may have thoughts about.

I would like to make the creation of inputs for access-om2 more transparent. For example, there are two solutions to the weights creation issue (#52):

  1. we create new weights using tools from other repos and individual experience and put these in the input tarball.
  2. we include code in the access-om2 repo that can recreate the weights. Ideally it should be possible for users (with some time) to understand and adapt this code to changing needs. It would allow them, for example, to do a detailed analysis of the pros and cons of certain remapping schemes.

Option 2 takes a broader view of access-om2 as not just the model code but also the necessary setup and input-creation code (of which there is a rapidly growing amount!).

Write a release script

We should have an automated way to do releases. This would include, among other things:

  1. build all models and update binaries
  2. update input tar balls
  3. update hashes in config.yaml
  4. update documentation with new hashes

The model crashes intermittently

This is a bit of a catch-all issue for all those intermittent model crashes.

Please add a comment below to document any new crashes. Please include:

  1. output of:
    $ cd $ACCESS_OM_DIR
    $ git rev-parse HEAD
    
  2. the experiment name, e.g. 1deg_jra55_ryf or 025deg_jra55_ryf or 01deg_jra55_ryf
  3. the model time when the crash occurred.
  4. the error message if you can find one. We expect that many of these will be salt or temp going out of range in the ocean. Output from this error can be found in access-om2.err. In this case please include the geographic location (x, y, z) and model indices (i, j, k) where the out-of-range value occurs.
  5. the timestep
  6. thoughts and next steps

Ice restarts are accumulating

This is more of a payu issue I think. It looks like the ice restarts are accumulating in the restart directory. This means there will be many copies of the same restart. For example take a look at:

/g/data3/hh5/tmp/cosima/access-om2-025/025deg_jra55_ryf_spinup7/restart055/ice

As part of this fix we should also delete all of the duplicate restarts that must exist in hh5.

Check units and ranges for forcing fields

Strange things happen when forcing fields are in wrong/unexpected units. It can take a long time to find these problems, for example the confusion we had when we switched from the CORE to the JRA55 runoff fields and the units changed.

It would save a lot of time in the long run if we did some sanity checking on incoming fields. In the case of MATM this could be done without any hit to overall performance (because it does nothing most of the time anyway).
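
A minimal sketch of the kind of range check MATM could apply to each field after reading it (the field name, units and bounds here are illustrative assumptions only):

! Sketch: abort with a clear message if a freshly-read forcing field falls
! outside a plausible physical range.
program forcing_range_check_demo
  implicit none
  real(kind=8) :: tas(4,3)
  tas = 280.d0                               ! stand-in for a field read from file
  call check_range('tas (K)', tas, 150.d0, 350.d0)
contains
  subroutine check_range(name, field, lo, hi)
    character(len=*), intent(in) :: name
    real(kind=8),     intent(in) :: field(:,:), lo, hi
    if (minval(field) < lo .or. maxval(field) > hi) then
       write(*,*) 'FATAL: forcing field ', trim(name), ' out of range: min=', &
                  minval(field), ' max=', maxval(field), ' expected [', lo, ',', hi, ']'
       error stop 1
    end if
  end subroutine check_range
end program forcing_range_check_demo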

More efficient remapping z => rho etc.

Yesterday's talk by Stephanie reminded me of some unfinished work I started a while back but didn't have time to follow through on.

In routines like diagnose_tracer_zrho_on_rho we want to put things on (neutral) density surfaces but there are loops over a large number of density levels (80?) times the 3D grid. These are pretty time consuming and I believe the masking operations can be made far more efficient.

If we calculate the maximum and minimum values of the model density on each layer it should be possible to short circuit the loops and achieve massive efficiencies. The same idea can be applied to
calculating transports on density surfaces. I remember running some toy cases and the expected efficiencies were an order of magnitude or more.

e.g.

! All land in a level will return -/+huge() rather than max/min
do k = 1,nk
max_den_lev(k)=maxval(Dens%potrho(isc:iec,jsc:jec,k),mask=Grd%tmask(isc:iec,jsc:jec,k)>0)
min_den_lev(k)=minval(Dens%potrho(isc:iec,jsc:jec,k),mask=Grd%tmask(isc:iec,jsc:jec,k)>0)
enddo
max_den=maxval(max_den_lev)
min_den=minval(min_den_lev)

! Keep looping same
do n=1,potrho_nk
if(Dens%potrho_ref(n) > max_den) exit
do k=nk-1,1,-1
if(min_den_lev(k+1) == huge(min_den_lev(k))) cycle ! All land in next level (could modify nk instead of this test)
if(Dens%potrho_ref(n) > max_den_lev(k+1)) exit ! > than all deeper densities in layer and above
if(Dens%potrho_ref(n) < min_den_lev(k)) cycle ! < than all shallower densities in layer
...
enddo
enddo

! Alternatively swap looping
do k=nk-1,1,-1
if(min_den_lev(k+1) == huge(min_den_lev(k))) cycle ! All land in next level (could modify nk instead of this test)
do n=1,potrho_nk
if(Dens%potrho_ref(n) > max_den_lev(k+1)) exit ! > than all deeper densities in layer and below
if(Dens%potrho_ref(n) < min_den_lev(k)) cycle ! < than all shallower densities in layer
...
enddo
enddo

Anybody feel like investigating this further?

Restart times offset

I am finding a minor problem with my 1° ACCESS-OM2 spinup8 case which might be a pointer to some inconsistencies in the way the coupled model deals with restarts.

In this simulation, I stopped after 182 years, because I wanted to restart from scratch to run another case with massless ice (spinup9). Then I decided to return to spinup8 (massless ice didn't help the stability problems). I copied the restart090 file back to the archive and set it running. MOM picked up from day 1 of year 183 as expected (although it gave the traditional first message saying it would start from zero):

==>Note: Time%Time_init = time stamp at very start of the MOM experiment is given by
yyyy/mm/dd hh:mm:ss = 1/ 1/ 1 0: 0: 0

==>Note: Time%model_time = time stamp at start of this leg of the MOM experiment is
yyyy/mm/dd hh:mm:ss = 183/ 1/ 1 0: 0: 0

but MATM did actually return to time zero:

MATM (init_calendar) idate0 = 10101

MATM istep1: 47 idate: 10102 sec: 0

What this means is that I have CICE output with dates starting from zero, which differ from the dates in the MOM output of the same run! It may make analysis a little tricky.

I'm not yet sure why this has happened -- maybe I should have restarted in a different way, but I think it is important to ensure consistent dates between outputs from the different component models ...

OASIS segfaults when doing traceback

A vanilla error in OASIS can trigger a traceback. Unfortunately this sometimes results in a segfault, which is very confusing. If this can't be fixed then I suggest removing the traceback altogether.

Check that physical constants are consistent across models

We've been tripped up several times by differences in physical constants between MOM and CICE and between the core MOM and ACCESS MOM code.

We need to implement some tests so that this doesn't happen again.

In addition it might be a good idea to set these values in one place only rather than have them defined separately in different submodules.
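
A minimal sketch of the kind of automated cross-check that could catch this (the constant names and values below are examples only, not the definitive MOM/CICE values, which a real test would take from the two code bases):

! Sketch: fail a test if nominally-identical constants differ between models.
program constants_consistency_check
  implicit none
  real(kind=8), parameter :: latent_heat_fusion_mom  = 3.34d5   ! J/kg, example value
  real(kind=8), parameter :: latent_heat_fusion_cice = 3.34d5   ! J/kg, example value
  real(kind=8), parameter :: tol = 1.d-12

  if (abs(latent_heat_fusion_mom - latent_heat_fusion_cice) > &
      tol*abs(latent_heat_fusion_mom)) then
     write(*,*) 'FAIL: latent heat of fusion differs between MOM and CICE'
     error stop 1
  end if
  write(*,*) 'OK: constants consistent'
end program constants_consistency_check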

Make matm block after each forcing send.

Since #6, the atm will send a large number of forcing fields at once because it is not blocking on any comms. This issue introduces a block so that MPI buffers are not in danger of overfilling: MATM will only send one set of coupling fields at a time and wait for an acknowledgement from CICE before proceeding.
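
A rough standalone sketch of the intended handshake (this is not the actual OASIS/MATM code; the ranks, tags and field sizes are illustrative, and it should be run with exactly 2 MPI ranks):

! Sketch: rank 0 plays MATM and blocks on an acknowledgement after each send,
! so only one set of coupling fields is ever in flight; rank 1 plays CICE and
! acknowledges once it has received the fields.
program send_ack_sketch
  use mpi
  implicit none
  integer :: ierr, rank, ack, step
  real(kind=8) :: fields(1000)
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  do step = 1, 3
     if (rank == 0) then
        fields = real(step, 8)
        call MPI_Send(fields, size(fields), MPI_DOUBLE_PRECISION, 1, 10, MPI_COMM_WORLD, ierr)
        ! Block here until the receiver acknowledges, so MPI buffers cannot fill up.
        call MPI_Recv(ack, 1, MPI_INTEGER, 1, 20, MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
     else if (rank == 1) then
        call MPI_Recv(fields, size(fields), MPI_DOUBLE_PRECISION, 0, 10, MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
        ack = step
        call MPI_Send(ack, 1, MPI_INTEGER, 0, 20, MPI_COMM_WORLD, ierr)
     end if
  end do
  call MPI_Finalize(ierr)
end program send_ack_sketch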

Remove coupling from ice -> atmosphere

Presently the atmosphere will block on a receive before reading and then sending forcing fields to the ice. This means that the ice and ocean are waiting on MATM file reads.

It would be much better for MATM to do the read and have the fields sent/ready so that the ice does not have to wait at all. This can easily be achieved by removing the MATM receive. Since MATM doesn't do anything with this field, it's not needed in any case.

River runoff is too concentrated

The 0.25° configuration is crashing near river outflow points. We assume this is due to high fresh water concentration.

To fix this we should introduce river spread into the online runoff remapping.

CICE5 URL

In .gitmodules, the first 3 submodules refer to a URL, while the CICE5 submodule refers to git@github...
This means that users without ssh keys set up fail at that step.
Would it be better to change this final line to:
url = https://github.com/OceansAus/cice5.git
?

Various fixes needed to improve installation experience

Thanks to @AndyHoggANU and others for these.

  • the last line in .gitmodules should be an https:// address, otherwise it won’t work for gumbies like myself & Kial who don’t have ssh keys set up. I put an issue on this one in github.
  • get_input_data.py doesn’t work without the sh package, which has to be installed by the user, and requires python 2.7.11 … it would help if that were more generic.
  • making worked well, but we would suggest putting the pytest instructions first, and making it more explicit that there are two ways of doing this.
  • the suggested commands to copy the executables should use “cp” not “ls”. Also, include optional commands for the other resolutions of cice.

CICE -> MOM coupling arrays are set before the CICE timestep is complete

In the CICE main loop the coupling arrays (e.g. salt and water fluxes) are collected in the middle of the ice timestep, i.e. after the first thermo calculations but before the second thermo, dynamics and radiation. It's not clear why this has been done, because a likely effect is that the coupling arrays have not finished being updated.

For example fresh() is stored ready for the coupler before CICE has added rain passing through the ice.

This is not the way things work with the other drivers e.g. CESM and ACCESS-CM.

On the other hand step_therm2 starts with a comment:
! Driver for thermodynamic changes not needed for coupling:
! transport in thickness space, lateral growth and melting.
!

Error in 1deg_core_nyf, get_field_dim_lens: Unsupported dimension name

Abhishek is trying to run the access-om2 1 deg CORE NYF experiment and is getting an inscrutable netCDF error (I think):

get_field_dim_lens: Unsupported dimension name

The access-om2.out file is at this point when it happens:

MOCN: _get_localcomm_ OK! il_commlocal=            3
Reading setup_nml
Reading grid_nml
Reading tracer_nml
Reading thermo_nml
Reading dynamics_nml
Reading shortwave_nml
Reading ponds_nml
Reading forcing_nml
Diagnostic output will be in file 
ice_diag.d                                                                     
 
Reading zbgc_nml

I don't think the traceback is useful, as it is just getting a kill signal when the error is tripped, but here is the first one in any case:

Image              PC                Routine            Line        Source             
fms_ACCESS-OM_030  0000000001A89F11  Unknown               Unknown  Unknown
fms_ACCESS-OM_030  0000000001A8804B  Unknown               Unknown  Unknown
fms_ACCESS-OM_030  0000000001A36074  Unknown               Unknown  Unknown
fms_ACCESS-OM_030  0000000001A35E86  Unknown               Unknown  Unknown
fms_ACCESS-OM_030  00000000019B7139  Unknown               Unknown  Unknown
fms_ACCESS-OM_030  00000000019C157C  Unknown               Unknown  Unknown
libpthread-2.12.s  00002B9FBF0C37E0  Unknown               Unknown  Unknown
libmlx4-rdmav2.so  00002B9FCCC06673  Unknown               Unknown  Unknown
mca_btl_openib.so  00002B9FCA6D16A2  Unknown               Unknown  Unknown
mca_btl_openib.so  00002B9FCA6DA3F5  Unknown               Unknown  Unknown
mca_btl_openib.so  00002B9FCA6DA814  Unknown               Unknown  Unknown
mca_btl_openib.so  00002B9FCA6DA8AC  Unknown               Unknown  Unknown
libopen-pal.so.13  00002B9FC308A66C  opal_progress         Unknown  Unknown
libmpi.so.12.0.2   00002B9FBEB50DFC  Unknown               Unknown  Unknown
libmpi.so.12.0.2   00002B9FBEB5132B  ompi_request_defa     Unknown  Unknown
mca_coll_tuned.so  00002B9FCB9AE106  Unknown               Unknown  Unknown
mca_coll_tuned.so  00002B9FCB9AE737  ompi_coll_tuned_b     Unknown  Unknown
mca_coll_tuned.so  00002B9FCB9A1888  ompi_coll_tuned_b     Unknown  Unknown
libmpi.so.12.0.2   00002B9FBEB6B1A8  MPI_Barrier           Unknown  Unknown
libmpi_mpifh.so.1  00002B9FBE8BA2C9  Unknown               Unknown  Unknown
fms_ACCESS-OM_030  0000000001795CFB  mpp_mod_mp_mpp_in         155  mpp_util_mpi.inc
fms_ACCESS-OM_030  0000000001457669  fms_mod_mp_fms_in         336  fms.F90
fms_ACCESS-OM_030  000000000040FFAA  MAIN__                    222  ocean_solo.F90
fms_ACCESS-OM_030  000000000040D91E  Unknown               Unknown  Unknown
libc-2.12.so       00002B9FBF2EFD1D  __libc_start_main     Unknown  Unknown
fms_ACCESS-OM_030  000000000040D829  Unknown               Unknown  Unknown

Set netcdf global attributes to record origin of all published .nc files

At present many output files have the same names and their meaning is distinguishable by path alone - eg there is no way to determine the experiment config used to produce a given ocean.nc file if it is moved from its directory. So the file paths are functioning as file metadata and should probably be recorded within the netcdf files themselves, eg as comments in global attributes. This will become increasingly important as we start publishing data on ua8 - eg if users download a bunch of files and forget where they came from.

So I suggest we have a common set of metadata we put in all output netcdf files, including e.g.

  • some boilerplate - eg "produced by the ACCESS-OM2-01 model as part of the COSIMA project, www.cosima.org"
  • the config hash and a link/doi to the config directory
  • the executable hashes
  • the doi for the dataset
  • the thredds command / url for getting this particular file
  • the directory the file belongs to (below output/)

.... anything else you can think of?
(@paolap - any suggestions?)

I presume there's a way to do this with nco?
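
nco's ncatted can certainly add global attributes in post-processing. As an illustration of the same idea at the library level, here is a minimal netCDF-Fortran sketch that stamps an existing file (the file name and attribute names/values are examples only, not an agreed metadata convention):

! Sketch: add provenance attributes to an existing output file.
! Requires the netCDF-Fortran library.
program stamp_metadata
  use netcdf
  implicit none
  integer :: ncid, ierr

  ierr = nf90_open('ocean.nc', NF90_WRITE, ncid)
  if (ierr /= NF90_NOERR) error stop 'open failed'

  ierr = nf90_redef(ncid)                    ! re-enter define mode
  ierr = nf90_put_att(ncid, NF90_GLOBAL, 'comment', &
         'Produced by the ACCESS-OM2 model as part of the COSIMA project, www.cosima.org.au')
  ierr = nf90_put_att(ncid, NF90_GLOBAL, 'config_hash',     'example-config-hash')
  ierr = nf90_put_att(ncid, NF90_GLOBAL, 'executable_hash', 'example-exe-hash')
  ierr = nf90_put_att(ncid, NF90_GLOBAL, 'output_subdir',   'output000/ocean')
  ierr = nf90_enddef(ncid)
  ierr = nf90_close(ncid)
end program stamp_metadata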

Use mushy thermodynamics in CICE to avoid negative salinity

With MOM-SIS 0.1deg, KDS75 & Russ's bathy we found that salinities could go negative, which @Hallberg-NOAA diagnosed as being due to ice formation in regions fresher than ice_bulk_salin=0.005 in SIS:
https://arccss.slack.com/archives/C6WBMAS1K/p1507688360000037

We dealt with this at the time by setting ice_bulk_salin=0.0 as a stop-gap (and also filling in part of Yenesei Gulf) but Bob suggested mushy thermodynamics in CICE would be a good long-term solution.

Can anyone see a reason not to use mushy ice for ACCESS-OM2-01?

I'm not sure what needs to be changed in MOM/CICE/OASIS to make this work.

CICE changes would include setting ktherm = 2 in cice_in.nml; I'm not sure what else.

With variable ice salinity, OASIS would need to couple the salt flux explicitly because MOM can't calculate it from melt. Is this already set up in namcouple? It currently includes

########## 
# Field 15 : salt flux   (no ref no for saltflux yet!)
##########
stflx_io salt_flx 454 400 1 i2o.nc EXPORTED
3600 2700 3600 2700 cict cict LAG=0 SEQ=+1
P 0 P 0

For MOM, presumably some changes are needed in mom_oasis3_interface_nml (eg adding salt_flx to fields_in; increasing num_fields_in)? Are any changes needed in auscom_ice_nml?

Presumably advecting variable salinity around will be extra work for CICE; I don't have a feel for whether that could upset load balancing.
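
To make the namelist questions above concrete, here is a rough sketch; ktherm, salt_flx, fields_in and num_fields_in come from the discussion above, but everything else (and the exact values) are assumptions that would need checking against the real cice_in.nml, MOM input.nml and namcouple:

! cice_in.nml (sketch): select mushy-layer thermodynamics
&thermo_nml
    ktherm = 2
/

! MOM input.nml (sketch): accept an explicit salt flux from the coupler.
! Existing entries are not reproduced here; the point is that 'salt_flx'
! would be appended to fields_in, num_fields_in incremented to match, and a
! corresponding salt_flx entry kept in namcouple (as in the excerpt above).
&mom_oasis3_interface_nml
    ! fields_in     = ..., 'salt_flx'
    ! num_fields_in = <existing value + 1>
/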

sign of salt flux might be wrong.

It looks like ACCESS-OM2 is losing salt too quickly. On a related matter - although perhaps not a cause - it looks like the sign of the salt flux is wrong in the ocean. See the attached plots from @AndyHoggANU.

The LH plot is from ACCESS, the RH plot is from MOM-SIS.

(Screenshot attached, dated 2017-07-24.)

Runtime limitations within OASIS-MCT

This issue revealed some limitations with how long OASIS3-MCT can run.

#50

I've created this issue to track any ongoing discussion. For example, @russfiedler said:

@nicjhan I don't get this. I've run for 500 years in the past with MCT without problems (admittedly an older version) and so have others. I can't believe that they've overlooked this and there's a 135 year limit. Am I missing something in what you're saying? Using -i8 seems dangerous to say the least.

@nicjhan You can run for longer if you use different units for the date argument to put_prism_proto etc and you are consistent with namcouple.

– date [INTEGER; IN]: number of seconds (or any other time units as long as the same
are used in all components and in the namcouple) at the time of the call (by convention at the
beginning of the timestep)

Our convention has been to reference seconds from the start of each leg so we are limited to 30 odd years per leg. There's no limit on the total run.

MOM and CICE create too many log files

Presently MOM and CICE create a log file for each PE. This is too much and blows out the inode quota on raijin after a short time when we are running at high resolution.

Sea-ice stripes: need smoother wind interpolation?

Symptoms

Sea ice in 0.25 and 0.1 runs shows stripes, eg these plots of aice_m (ice area aggregate) in 0.25 deg run /g/data3/hh5/tmp/cosima/access-om2-025/025deg_jra55_ryf_spinup7/output144/ice/OUTPUT/iceh.0145-12.nc
in the Ross and Amundsen Seas:
(Figures attached: wind-interp-stripes-ross-sea, aice_vs_j.)

The line plots are meridional transects vs grid index at various longitudes in the Amundsen Sea. This shows that the stripes have a scale of about 5 grid points. With the Mercator grid the spacing in MOM is about 0.1deg at this latitude so the scale is about 0.5deg. This is roughly the JRA55 resolution (resolution of JRA55 data in /g/data1/ua8/JRA55-do/RYF/v1-1/ depends on field; for zonal velocity it is slightly uneven 0.55~0.562deg in latitude:
ncdump -v latitude /g/data1/ua8/JRA55-do/RYF/v1-1/RYF.u_10.1990_1991.nc
but uniform 0.5625deg in longitude).

Stripes of the same physical scale are also seen in 1/10 runs.

Diagnosis

The ice banding is probably due to wind stress curl having steps (or spikes?) at the JRA55 grid scale due to the wind velocity interpolation method used (conservative piecewise constant?) in the regridding. This would give stripy Ekman transport that could advect the ice into bands or draw cool water up from below in stripes.

@nicjhan - what interpolation is currently being used for wind?

Cure?

Try using a smoother interpolation method for regridding uas_10m and vas_10m. I think at minimum the interpolated velocities should be continuous (eg piecewise (bi-)linear) so the curl is piecewise constant. But smoother may be better - eg 2nd order velocity interpolation to give continuous curl. Perhaps see what @PaulSpence did to fix this in MOM-SIS 1/4deg.

Conservative 2nd order is in development but not yet available. But it is not important to be conservative when interpolating wind velocity, as the ocean is driven by stress which is quadratically related and so will not be conservatively interpolated even if the velocity is.

It is important to retain conservative interpolation for the other fields, so these should be left as-is (at least until conservative 2nd order becomes available).

CICE injecting a lot of salt into ocean

The fsalt variable in CICE (visible as diagnostic fsalt_ai_m) does not appear to be balanced over time. A lot more salt is entering the ocean than leaving. I'm not yet sure whether this is coming from the NH or SH. One possible explanation is that the model has started with too much ice in the NH and it is melting.

Incorrect initial conditions for most configurations

The input/mom_1deg/ocean_temp_salt.res.nc file is the old Levitus initial conditions.

Now we're using the regridded WOA13 files here:

1b3c43840432aac5a0d8f9b998443c7b  /g/data1/ua8/MOM/initial_conditions/WOA/10/ocean_temp_salt.res.nc

Andy has an earlier version of this file, so although his md5sum is not the same, I have checked that the data inside matches.

I'm assuming all the other initial conditions files need to change too.

access-om2 has a memory leak

An access-om2 1 deg continuous run will eventually run out of memory and be killed. I was able to run into Sept of the 5th year before it was killed.

For the time being I'll limit the run length to 2 years.

runoff coupling frequency too high.

The runoff coupling should occur with the same frequency as the dataset. Runoff coupling is expensive because it includes remapping on the atm side.

CICE unnecessarily calls HaloUpdate on coupling fields immediately after receiving them

Each CICE PE does a halo update on each coupling field between atm and ocean immediately after receiving these from OASIS. These are expensive, e.g. for a 20 day 0.1 deg run ice_diag.d says:

Timer 23: ocn_halos 1772.89 seconds
Timer stats (node): min = 0.33 seconds
max = 1772.89 seconds
mean= 420.71 seconds

Timer 24: atm_halos 21.78 seconds
Timer stats (node): min = 0.01 seconds
max = 21.78 seconds
mean= 6.68 seconds

These are absolute values from a run with runtime 8444.737558 s. The reason that the ocean coupling field halos are using so much more time than the atmosphere is because the ice and ocean couple more frequently.

There is no obvious reason why halo updates should be necessary if OASIS is configured properly. The halos should be filled in by the coupler, not with a separate call afterwards.

A fix for this issue will figure out how OASIS can fill the halos of the coupling fields and remove all of these calls to HaloUpdate in the CICE driver code. This will improve the code and should have a noticeable effect on 0.1 deg performance.
