ecco-group / ecco-v4-configurations

This repository contains documentation (doc/) and model configuration files (code/, namelist/) for official releases of the ECCO version 4 ocean and sea-ice state estimates. Model configuration files allow users to reproduce the state estimate or conduct new simulation experiments.

C 13.50% Fortran 85.15% Shell 0.78% Python 0.43% Makefile 0.07% Roff 0.05% Dockerfile 0.02%

ecco-v4-configurations's People

Contributors

duncanbark, gaelforget, gjmoore, ifenty, mmazloff, owang01


ecco-v4-configurations's Issues

new I/O bottleneck as a result of reformatting

On a different channel, @hongandyan noted that

But it took ~204 seconds to load 312 monthly v4 ETAN from 312 files as opposed to 1.4 seconds to load 288 monthly v3 ETAN from 13 files. Perhaps this arrangement has more benefits to ECCO-v4.py

I have not tried it myself, but this looks like a major setback and an inconvenience to users!
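
For anyone who wants to try reproducing the comparison, here is a minimal sketch using xarray; the file patterns are hypothetical stand-ins for the two layouts:

import glob
import time
import xarray as xr

def time_to_load(pattern, varname="ETAN"):
    # Time how long it takes to read `varname` from every file matching `pattern`.
    files = sorted(glob.glob(pattern))
    t0 = time.perf_counter()
    for f in files:
        xr.open_dataset(f)[varname].load()
    return time.perf_counter() - t0, len(files)

# Hypothetical paths: one file per month (v4r4 layout) vs. a few multi-month files.
t_new, n_new = time_to_load("v4r4/ETAN/ETAN_*.nc")
t_old, n_old = time_to_load("v4r3/nctiles_monthly/ETAN*.nc")
print(n_new, "files:", round(t_new, 1), "s  vs ", n_old, "files:", round(t_old, 1), "s")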

I see this as a separate issue from #40, but it's not unrelated, since it stems from the same reformatting, which looks more and more like it's creating major problems.

The only simple solution now might be for the ECCO team at JPL & UT to add another folder with v4r4 etc. in the original nctiles formatting and file layout used in earlier releases, and then add guidelines in the READMEs to let users know which version might be best depending on whether they use ECCO-v4.py, gcmfaces, or other known software.

Linking @ifenty, @owang01, and @timothyas here, as they seem likely to know who is responsible for the ECCO files at JPL & UT under the relevant NASA grant (not sure I even have a copy...)

Sea level pressure

R4 is forced with high-frequency sea level pressure fields, right? We should add this pressure field as part of the product, probably as an EXF field.

@owang01

Request for a specific configuration of ECCOv4

Hi all,

I hope that this is an acceptable venue for this request. If not, let me know and I'll move it elsewhere.

Is there an ECCOv4 configuration/solution that avoids using the bulk formulae? That is, a solution that uses net heat flux as a control variable?

For context: I would like to attempt an adjoint reconstruction using net heat flux and wind stress. However, if we use the bulk formulae, there is a risk of double-counting (as pointed out by Yavor Kostov). Gael Forget suggested that avoiding the bulk formulae altogether could be a promising way to start.

Thanks for any help/guidance you can provide.

Nonphysical downward shortwave radiation (EXFswdn) after optimization

Downward shortwave radiation (EXFswdn) can be negative after optimization. This is problematic for driving ocean biology, causing negative photosynthetically available radiation (PAR), which leads to dead zones. Conversely, night-time radiation can be non-zero after adjustment. Some suggested fixes that we can consider (the first two are sketched after this list) include:

  1. setting negative values of EXFswdn to zero,
  2. setting EXFswdn to zero wherever the prescribed atmospheric reanalysis EXFswdn is zero,
  3. adding the problematic corrections to longwave radiation (EXFlwdn) instead, and
  4. turning off shortwave adjustments altogether.
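
For reference, a minimal post-processing sketch of options 1 and 2; the file and variable names are hypothetical, and the real fix would belong in the optimization/regeneration workflow:

import xarray as xr

# Adjusted (post-optimization) shortwave and the prescribed reanalysis field.
swdn_adj = xr.open_dataset("EXFswdn_adjusted.nc")["EXFswdn"]
swdn_ref = xr.open_dataset("EXFswdn_reanalysis.nc")["EXFswdn"]

# Option 1: clip negative values to zero.
swdn_fix = swdn_adj.clip(min=0.0)

# Option 2: additionally force zero wherever the reanalysis field is zero
# (e.g. at night), so adjustments cannot introduce spurious night-time radiation.
swdn_fix = swdn_fix.where(swdn_ref != 0.0, 0.0)

swdn_fix.to_netcdf("EXFswdn_fixed.nc")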

regeneration of netcdf

Feedback from PO.DAAC (a few of the attribute fixes are sketched in code after the list):

https://podaac.jpl.nasa.gov/PO.DAAC_DataManagementPractices

  1. Add time_coverage_resolution to native grid fields: P1M (monthly) or P1D (daily).

  2. Fix time_coverage_resolution on the interpolated files (remove the extra "" fields).

  3. Remove all of our custom standard names: http://cfconventions.org/Data/cf-standard-names/67/build/cf-standard-name-table.html

  4. Add comments defining what all the variables are. Some are obvious, but not all of them, especially in the ancillary file. Add this information to our "variable" JSON files.

  5. Remove non-essential grid information from the native grid files.

  6. Add a metadata comment to every netCDF file telling users that the grid information is in the netCDF grid file.

  7. Add a UUID to each file.

  8. valid_range should be an array.

  9. time needs a standard_name.

  10. What is the SWAP dimension?

  11. Use valid_range (min, max).

  12. For SSH, use "sea surface height above geoid".

  13. Use comments instead of 'LONG NAME'.

  14. Find out what 'grid_mapping' means.

  15. Use the same time format in time_coverage_start, time_coverage_end, and other places; decide whether or not to include the 'T'.

  16. geospatial_vertical_units should be 'm'.

  17. Change canonical units from (kg/m^3)/m to the format kg m-3 m-1, and degree_C.

  18. Use cell bounds.
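
A minimal sketch of applying a few of the attribute fixes above (items 1, 7 and 9) with xarray; the file name is hypothetical, and the real changes belong in the netCDF regeneration scripts:

import uuid
import xarray as xr

ds = xr.open_dataset("ETAN_mon_mean_native.nc")  # hypothetical native-grid file

ds.attrs["time_coverage_resolution"] = "P1M"   # item 1: monthly fields
ds.attrs["uuid"] = str(uuid.uuid4())           # item 7: one UUID per file
ds["time"].attrs["standard_name"] = "time"     # item 9: CF standard name for time

ds.to_netcdf("ETAN_mon_mean_native_fixed.nc")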

SIaaflux needs a better description

Currently:

      "name": "SIaaflux",
      "long_name": "Conservative ocean and sea-ice advective heat flux adjustment",
      "units": "W m-2",
      "comments_1": "Heat flux associated with the temperature difference between sea surface temperature and sea-ice (assume 0 degree C in the model)",
      "comments_2": "Note: heat flux needed to melt/freeze sea-ice at 0 degC to sea water at the ocean surface (at sea surface temperature), excluding the latent heat of fusion.",
      "direction": ">0 decrease potential temperature (THETA)",

Need to add R1, R2, and R3 directories

Currently our ECCO-GROUP repository only includes namelist and code directories for release 2. I think we should consolidate the ECCOv4 release namelist and code directories in one place and label them properly.

(From a suggestion by a student at the ECCO 2019 summer school.)

GGL mixing parameter change to c_k 0.07

Concerning the GGL mixing parameter issue described in MITgcm/MITgcm#169:

We should consider changing the value of c_k to 0.07 to reduce the GGL mixing efficiency to 0.2 from the default 0.285714. This issue was raised by P. Cessi at the ECCO meeting.


According to Gregg et al. (2018), \gamma should be 0.2.

The current default value of c_k is 0.1. A value of c_k of 0.1 implies a value of \gamma of 0.285714.

For \gamma = 0.2 the value should be c_k = 0.07, because

c_k = 0.5 * \gamma * c_ep * P_rt
    = 0.5 * 0.2 * 0.7 * 1
    = 0.07
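
A quick check of the arithmetic with the constants quoted above:

# c_k = 0.5 * gamma * c_ep * P_rt, with c_ep = 0.7 and P_rt = 1
c_ep, P_rt = 0.7, 1.0

def c_k_from_gamma(gamma):
    return 0.5 * gamma * c_ep * P_rt

def gamma_from_c_k(c_k):
    return c_k / (0.5 * c_ep * P_rt)

print(c_k_from_gamma(0.2))   # ~0.07   -> proposed value
print(gamma_from_c_k(0.1))   # ~0.2857 -> implied by the current default c_k = 0.1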

My vote is to change the default value of c_k to 0.07, because if we only change it in the ECCO setup then the knowledge of the 'proper' \gamma will be lost. Whether to carry the old value of c_k in the verification experiments I leave to @jm-c and others. I personally don't like carrying 'bad' parameter values around, because when someone new adapts a verification experiment to a new domain they will probably carry those old values along out of ignorance.

Penalize isopycnal depths and T and S on isopycnals?

Suggestion from J. Toole: perhaps we should write cost functions to constrain 1) isopycnal depth and 2) the salinity - or temperature - on isopycnals?

"The variance of Theta/S - below the surface waters - is far less than T or S on pressure surfaces, which should translate into an up-weighting in the cost function."

If true, much of our large in-situ variance is associated with isopycnal displacements in the presence of vertical T and S gradients.

use new angular-momentum-conserving scheme

according to this MITgcm issue:

MITgcm/MITgcm#212

a new scheme, selectCoriScheme=2, seems to conserve angular momentum:

    add 2 new Coriolis schemes for vector-invariant formulation.
    add new (run-time) coriolis-scheme selector "selectCoriScheme"
    to replace (in src code) "useJamartWetPoints" and "useEnergyConservingCoriolis";

ECCO v4 should consider using this scheme.

Adjoint runs fail with salt/tempVertAdvScheme = 33

Adjoint runs fail during the forward portion of the run (i.e. during calculation of the cost function) with CALC_R_STAR errors when I try a run with salt/tempAdvScheme=33 and do not specify the VertAdvScheme (which is set to 3 in the ECCO setup). Not sure if this is expected behaviour. The cost function is a simple global temperature sum.

Background: I've spun up a version of the LLC grid forced by CORE2 normal-year fields (setup verification/global_oce_llc90/), which has various differences in its data namelists compared to the ECCO standard setup. Adjoint runs fail with the current options. By comparison with the ECCO parameters and several test runs, I determined that it is the differing vertical advection schemes causing the failure. Specifying vertAdvScheme=3 leads to a successful run.

missing v4r5 docs and links

A user recently asked me where they could download the forcing for v4r5. I cannot find this information here. Do we have it on a public-facing server somewhere?

Other things also seem to be missing from the repo:

  • a README with directions to rerun v4r5
  • a link to our synopsis doc for v4r5

Compilation of ECCOv4r4 flux-forced adjoint fails with "failed to convert" error

I'm attempting to compile the r4 flux-forced version on the ARCHER2 machine in the UK. I can successfully compile the 'regular' r4 version. I have an up-to-date version of the code and am using the following commands:

export MITGCM_ROOTDIR=/home/n01/n01/emmomp/MITgcm
export PATH=$MITGCM_ROOTDIR/tools:$PATH
export MITGCM_OPT=$MITGCM_ROOTDIR/tools/build_options/dev_linux_amd64_cray_archer2

module load cray-hdf5-parallel
module load cray-netcdf-hdf5parallel

genmake2 -ieee -mpi -mods=../code_ff_ad -of=$MITGCM_OPT
make depend
make adtaf
make adall

Here's my Makefile for reference.
The flux-forced build is failing at the TAF stage with the following error:

ftn -o mitgcmuv_ad -em -ef -dynamic -h pic -O0 -hfp0 ad_taf_output.o addummy_in_stepping.o diagnostics_fill_state.o diagnostics_main_init.o do_the_model_io.o ecco_check.o ecco_cost_init_barfiles.o ecco_readparms.o exf_check.o exf_init_fixed.o exf_monitor.o exf_readparms.o exf_summary.o mdsio_write_meta.o pabar_output.o profiles_init_fixed.o stergloh_output.o active_file_ad.o active_file_control.o active_file_control_slice.o active_file.o active_file_gen_ad.o active_file_gen.o active_file_gen_g.o active_file_g.o active_file_loc_ad.o active_file_loc.o active_file_loc_g.o adautodiff_whtapeio_sync.o addamp_adj.o addummy_in_dynamics.o adopen_adclose.o adread_adwrite.o adzero_adj.o autodiff_check.o autodiff_findunit.o autodiff_inadmode_set_ad.o autodiff_inadmode_set.o autodiff_inadmode_set_g.o autodiff_inadmode_unset_ad.o autodiff_inadmode_unset.o autodiff_inadmode_unset_g.o autodiff_ini_model_io.o autodiff_readparms.o autodiff_whtapeio_sync.o copy_ad_uv_outp.o copy_advar_outp.o damp_adj.o dummy_in_dynamics.o dummy_in_stepping.o g_dummy_in_dynamics.o g_dummy_in_stepping.o global_max_ad.o global_sum_ad.o global_sum_tile_ad.o myactivefunction_ad.o myactivefunction.o zero_adj.o cal_daysformonth.o cal_dayspermonth.o cal_init_fixed.o cal_monthsforyear.o cal_monthsperyear.o cal_readparms.o cal_set.o cal_stepsforday.o cal_stepsperday.o cal_summary.o cal_weekday.o cost_check.o cost_dependent_init.o cost_depth.o cost_final_restore.o cost_final_store.o cost_readparms.o adctrl_bound.o ctrl_bound.o ctrl_check.o ctrl_hfacc_ini.o ctrl_init_ctrlvar.o ctrl_init.o ctrl_init_wet.o ctrl_mask_set_xz.o ctrl_mask_set_yz.o ctrl_pack.o ctrl_readparms.o ctrl_set_fname.o ctrl_set_globfld_xy.o ctrl_set_globfld_xyz.o ctrl_set_globfld_xz.o ctrl_set_globfld_yz.o ctrl_set_pack_xy.o ctrl_set_pack_xyz.o ctrl_set_pack_xz.o ctrl_set_pack_yz.o ctrl_set_unpack_xy.o ctrl_set_unpack_xyz.o ctrl_set_unpack_xz.o ctrl_set_unpack_yz.o ctrl_summary.o ctrl_unpack.o optim_readparms.o chksum_tiled.o debug_call.o debug_cs_corner_uv.o debug_enter.o debug_fld_stats_rl.o debug_fld_stats_rs.o debug_leave.o debug_msg.o debug_stats_rl.o debug_stats_rs.o fill_in_corners_rl.o write_fullarray_rl.o write_fullarray_rs.o diag_calc_psivel.o diag_cg2d.o diagnostics_addtolist.o diagnostics_calc_phivel.o diagnostics_check.o diagnostics_clear.o diagnostics_fill.o diagnostics_fill_field.o diagnostics_fill_rs.o diagnostics_fract_fill.o diagnostics_ini_io.o diagnostics_init_early.o diagnostics_init_fixed.o diagnostics_init_varia.o diagnostics_interp_p2p.o diagnostics_interp_vert.o diagnostics_list_check.o diagnostics_mnc_out.o diagnostics_out.o diagnostics_readparms.o diagnostics_read_pickup.o diagnostics_scale_fill.o diagnostics_scale_fill_rs.o diagnostics_set_calc.o diagnostics_setdiag.o diagnostics_set_levels.o diagnostics_set_pointers.o diagnostics_status_error.o diagnostics_sum_levels.o diagnostics_summary.o diagnostics_switch_onoff.o diagnostics_utils.o diagnostics_write.o diagnostics_write_pickup.o diagstats_ascii_out.o diagstats_calc.o diagstats_clear.o diagstats_close_io.o diagstats_fill.o diagstats_global.o diagstats_ini_io.o diagstats_local.o diagstats_mnc_out.o diagstats_others_calc.o diagstats_output.o diagstats_setdiag.o diagstats_set_pointers.o diagstats_set_regions.o diag_vegtile_fill.o ecco_cost_init_fixed.o ecco_cost_summary.o ecco_cost_weights.o ecco_summary.o exch2_3d_r4.o exch2_3d_r8.o exch2_ad_get_r41.o exch2_ad_get_r42.o exch2_ad_get_r81.o exch2_ad_get_r82.o exch2_ad_get_rl1.o exch2_ad_get_rl2.o exch2_ad_get_rs1.o exch2_ad_get_rs2.o 
exch2_ad_put_r41.o exch2_ad_put_r42.o exch2_ad_put_r81.o exch2_ad_put_r82.o exch2_ad_put_rl1.o exch2_ad_put_rl2.o exch2_ad_put_rs1.o exch2_ad_put_rs2.o exch2_check_depths.o exch2_get_r41.o exch2_get_r42.o exch2_get_r81.o exch2_get_r82.o exch2_get_rl1.o exch2_get_rl2.o exch2_get_rs1.o exch2_get_rs2.o exch2_get_scal_bounds.o exch2_get_uv_bounds.o exch2_put_r41.o exch2_put_r42.o exch2_put_r81.o exch2_put_r82.o exch2_put_rl1.o exch2_put_rl2.o exch2_put_rs1.o exch2_put_rs2.o exch2_r41_cube_ad.o exch2_r41_cube.o exch2_r42_cube_ad.o exch2_r42_cube.o exch2_r81_cube_ad.o exch2_r81_cube.o exch2_r82_cube_ad.o exch2_r82_cube.o exch2_recv_r41.o exch2_recv_r42.o exch2_recv_r81.o exch2_recv_r82.o exch2_recv_rl1.o exch2_recv_rl2.o exch2_recv_rs1.o exch2_recv_rs2.o exch2_rl1_cube_ad.o exch2_rl1_cube.o exch2_rl2_cube_ad.o exch2_rl2_cube.o exch2_rs1_cube_ad.o exch2_rs1_cube.o exch2_rs2_cube_ad.o exch2_rs2_cube.o exch2_s3d_r4.o exch2_s3d_r8.o exch2_s3d_rl.o exch2_s3d_rs.o exch2_send_r41.o exch2_send_r42.o exch2_send_r81.o exch2_send_r82.o exch2_send_rl1.o exch2_send_rl2.o exch2_send_rs1.o exch2_send_rs2.o exch2_sm_3d_r4.o exch2_sm_3d_r8.o exch2_sm_3d_rs.o exch2_uv_3d_r4.o exch2_uv_3d_r8.o exch2_uv_agrid_3d_r4.o exch2_uv_agrid_3d_r8.o exch2_uv_bgrid_3d_r4.o exch2_uv_bgrid_3d_r8.o exch2_uv_bgrid_3d_rl.o exch2_uv_bgrid_3d_rs.o exch2_uv_cgrid_3d_r4.o exch2_uv_cgrid_3d_r8.o exch2_uv_cgrid_3d_rl.o exch2_uv_cgrid_3d_rs.o exch2_uv_dgrid_3d_r4.o exch2_uv_dgrid_3d_r8.o exch2_z_3d_r4.o exch2_z_3d_r8.o exch2_z_3d_rl.o exch2_z_3d_rs.o w2_cumulsum_z_tile.o w2_e2setup.o w2_eeboot.o w2_map_procs.o w2_print_comm_sequence.o w2_print_e2setup.o w2_readparms.o w2_set_cs6_facets.o w2_set_f2f_index.o w2_set_gen_facets.o w2_set_map_cumsum.o w2_set_map_tiles.o w2_set_myown_facets.o w2_set_single_facet.o w2_set_tile2tiles.o exf_ad_dump.o exf_adjoint_snapshots_ad.o exf_adjoint_snapshots.o exf_adjoint_snapshots__g.o exf_check_range.o exf_diagnostics_fill.o exf_diagnostics_init.o exf_getffield_start.o exf_getfield_start.o exf_interp_read.o exf_monitor_ad.o exf_zenithangle_table.o gad_advscheme.o gad_check.o gad_diagnostics_init.o gad_diagnostics_state.o gad_init_fixed.o gad_osc_hat_r.o gad_osc_hat_x.o gad_osc_hat_y.o gad_osc_mul_r.o gad_osc_mul_x.o gad_osc_mul_y.o gad_plm_fun.o gad_ppm_adv_r.o gad_ppm_adv_x.o gad_ppm_adv_y.o gad_ppm_flx_r.o gad_ppm_flx_x.o gad_ppm_flx_y.o gad_ppm_fun.o gad_ppm_hat_r.o gad_ppm_hat_x.o gad_ppm_hat_y.o gad_ppm_p3e_r.o gad_ppm_p3e_x.o gad_ppm_p3e_y.o gad_pqm_adv_r.o gad_pqm_adv_x.o gad_pqm_adv_y.o gad_pqm_flx_r.o gad_pqm_flx_x.o gad_pqm_flx_y.o gad_pqm_fun.o gad_pqm_hat_r.o gad_pqm_hat_x.o gad_pqm_hat_y.o gad_pqm_p5e_r.o gad_pqm_p5e_x.o gad_pqm_p5e_y.o gad_write_pickup.o salt_fill.o ggl90_check.o ggl90_diagnostics_init.o ggl90_idemix.o ggl90_output.o ggl90_readparms.o gmredi_calc_eigs.o gmredi_calc_psi_bvp.o gmredi_calc_urms.o gmredi_check.o gmredi_diagnostics_fill.o gmredi_diagnostics_impl.o gmredi_diagnostics_init.o gmredi_init_fixed.o gmredi_k3d.o gmredi_mnc_init.o gmredi_output.o gmredi_readparms.o gmredi_read_pickup.o gmredi_write_pickup.o submeso_calc_psi.o grdchk_check.o grdchk_getadxx.o grdchk_get_obcs_mask.o grdchk_get_position.o grdchk_getxx.o grdchk_init.o grdchk_loc.o grdchk_main.o grdchk_print.o grdchk_readparms.o grdchk_setxx.o grdchk_summary.o mdsio_buffertorl.o mdsio_buffertors.o mdsio_check4file.o mdsio_facef_read.o mdsio_gl.o mdsio_gl_slice.o mdsio_pass_r4torl.o mdsio_pass_r4tors.o mdsio_pass_r8torl.o mdsio_pass_r8tors.o mdsio_rd_rec_rl.o mdsio_rd_rec_rs.o mdsio_read_field.o 
mdsio_read_meta.o mdsio_read_section.o mdsio_read_tape.o mdsio_readvec_loc.o mdsio_read_whalos.o mdsio_rw_field.o mdsio_rw_slice.o mdsio_seg4torl.o mdsio_seg4tors.o mdsio_seg8torl.o mdsio_seg8tors.o mdsio_segxtorx_2d.o mdsio_write_field.o mdsio_writelocal.o mdsio_write_section.o mdsio_write_tape.o mdsio_writevec_loc.o mdsio_write_whalos.o mdsio_wr_metafiles.o mdsio_wr_rec_rl.o mdsio_wr_rec_rs.o mon_advcfl.o mon_advcflw2.o mon_advcflw.o mon_calc_advcfl.o mon_calc_stats_rl.o mon_calc_stats_rs.o mon_init.o monitor_ad.o monitor.o monitor_g.o mon_ke.o mon_out.o mon_printstats_rl.o mon_printstats_rs.o mon_set_iounit.o mon_set_pref.o mon_solution.o mon_stats_latbnd_rl.o mon_stats_rl.o mon_stats_rs.o mon_surfcor.o mon_vort3.o mon_writestats_rl.o mon_writestats_rs.o active_file_control_profiles.o active_file_profiles_ad.o active_file_profiles.o active_file_profiles_g.o profiles_findunit.o profiles_ini_io.o profiles_init_ncfile.o profiles_readparms.o profiles_readvector.o get_write_global_fld.o read_glvec_rl.o read_glvec_rs.o read_mflds.o rw_get_suffix.o set_write_global_fld.o write_fld_3d_rl.o write_fld_3d_rs.o write_fld_s3d_rl.o write_fld_s3d_rs.o write_fld_xy_rl.o write_fld_xy_rs.o write_fld_xyz_rl.o write_fld_xyz_rs.o write_glvec_rl.o write_glvec_rs.o write_local_rl.o write_local_rs.o write_rec.o salt_plume_check.o salt_plume_diagnostics_init.o salt_plume_init_fixed.o salt_plume_mnc_init.o salt_plume_readparms.o sbo_calc.o sbo_check.o sbo_output.o sbo_readparms.o sbo_rho.o seaice_ad_dump.o seaice_calc_lhs.o seaice_calc_residual.o seaice_calc_rhs.o seaice_check.o seaice_cost_init_fixed.o seaice_cost_weights.o seaice_diagnostics_init.o seaice_do_ridging.o seaice_fgmres.o seaice_init_fixed.o seaice_itd_pickup.o seaice_itd_redist.o seaice_itd_remap.o seaice_itd_sum.o seaice_jacvec.o seaice_jfnk.o seaice_krylov.o seaice_mnc_init.o seaice_monitor_ad.o seaice_monitor.o seaice_obcs_output.o seaice_preconditioner.o seaice_prepare_ridging.o seaice_readparms.o seaice_summary.o seaice_turnoff_io.o smooth_filtervar2d.o smooth_filtervar3d.o smooth_init2d.o smooth_init3d.o smooth_init_fixed.o smooth_readparms.o timeave_init_fixed.o mom_calc_3d_strain.o mom_calc_smag_3d.o mom_diagnostics_init.o mom_init_fixed.o mom_u_botdrag_impl.o mom_u_implicit_r.o mom_uv_smag_3d.o mom_v_botdrag_impl.o mom_v_implicit_r.o mom_w_coriolis_nh.o mom_w_metric_nh.o mom_w_sidedrag.o mom_w_smag_3d.o all_proc_die.o bar2.o bar_check.o barrier.o check_threads.o comm_stats.o cumulsum_z_tile.o diff_phase_multiple.o eeboot.o eeboot_minimal.o eedata_example.o eedie.o eeintro_msg.o eeset_parms.o eewrite_eeenv.o exch0_r4.o exch0_r8.o exch0_rl.o exch0_rs.o exch1_bg_r4_cube.o exch1_bg_r8_cube.o exch1_bg_rl_cube.o exch1_bg_rs_cube.o exch1_r4_cube.o exch1_r4.o exch1_r8_cube.o exch1_r8.o exch1_rl_ad.o exch1_rl_cube_ad.o exch1_rl_cube.o exch1_rl.o exch1_rs_ad.o exch1_rs_cube_ad.o exch1_rs_cube.o exch1_rs.o exch1_uv_r4_cube.o exch1_uv_r8_cube.o exch1_uv_rl_cube.o exch1_uv_rs_cube.o exch1_z_r4_cube.o exch1_z_r8_cube.o exch1_z_rl_cube.o exch1_z_rs_cube.o exch_3d_r4.o exch_3d_r8.o exch_cycle_ebl.o exch_init.o exch_r4_recv_get_x.o exch_r4_recv_get_y.o exch_r4_send_put_x.o exch_r4_send_put_y.o exch_r8_recv_get_x.o exch_r8_recv_get_y.o exch_r8_send_put_x.o exch_r8_send_put_y.o exch_rl_recv_get_x.o exch_rl_recv_get_y.o exch_rl_send_put_x.o exch_rl_send_put_y.o exch_rs_recv_get_x.o exch_rs_recv_get_y.o exch_rs_send_put_x.o exch_rs_send_put_y.o exch_s3d_r4.o exch_s3d_r8.o exch_s3d_rl.o exch_s3d_rs.o exch_sm_3d_r4.o exch_sm_3d_r8.o exch_sm_3d_rs.o 
exch_uv_3d_r4.o exch_uv_3d_r8.o exch_uv_agrid_3d_r4.o exch_uv_agrid_3d_r8.o exch_uv_bgrid_3d_r4.o exch_uv_bgrid_3d_r8.o exch_uv_bgrid_3d_rl.o exch_uv_bgrid_3d_rs.o exch_uv_dgrid_3d_r4.o exch_uv_dgrid_3d_r8.o exch_uv_xy_r4.o exch_uv_xy_r8.o exch_uv_xyz_r4.o exch_uv_xyz_r8.o exch_xy_r4.o exch_xy_r8.o exch_xyz_r4.o exch_xyz_r8.o exch_z_3d_r4.o exch_z_3d_r8.o exch_z_3d_rl.o exch_z_3d_rs.o fool_the_compiler.o gather_2d_r4.o gather_2d_r8.o gather_2d_wh_r4.o gather_2d_wh_r8.o gather_vec_r4.o gather_vec_r8.o gather_xz.o gather_yz.o global_max.o global_sum.o global_sum_singlecpu.o global_sum_tile.o global_vec_sum.o gsum.o ini_communication_patterns.o ini_procs.o ini_threading_environment.o main.o master_cpu_io.o master_cpu_thread.o mds_byteswapi4.o mds_byteswapr4.o mds_byteswapr8.o mdsfindunit.o mds_flush.o mds_reclen.o memsync.o nml_change_syntax.o nml_set_terminator.o open_copy_data_file.o print.o reset_halo.o scatter_2d_r4.o scatter_2d_r8.o scatter_2d_wh_r4.o scatter_2d_wh_r8.o scatter_vec_r4.o scatter_vec_r8.o scatter_xz.o scatter_yz.o stop_if_error.o timers.o utils.o write_utils.o add_walls2masks.o calc_eddy_stress.o calc_grid_angles.o calc_gw.o calc_oce_mxlayer.o cg2d_ex0.o cg2d.o cg2d_sr.o cg3d_ex0.o cg3d.o check_pickup.o config_check.o config_summary.o diags_oceanic_surf_flux.o do_statevars_diags.o do_statevars_tave.o do_write_pickup.o external_forcing.o find_hyd_press_1d.o gsw_teos10.o ini_cartesian_grid.o ini_cg3d.o ini_cori.o ini_curvilinear_grid.o ini_cylinder_grid.o ini_eos.o ini_global_domain.o ini_grid.o ini_linear_phisurf.o ini_local_grid.o ini_masks_etc.o ini_mnc_vars.o ini_model_io.o ini_nh_vars.o ini_parms.o ini_sigma_hfac.o ini_spherical_polar_grid.o initialise_fixed.o ini_vertical_grid.o load_ref_files.o packages_boot.o packages_check.o packages_error_msg.o packages_init_fixed.o packages_print_msg.o packages_readparms.o packages_unused_msg.o packages_write_pickup.o plot_field.o port_rand.o post_cg3d.o pre_cg3d.o rotate_spherical_polar_grid.o set_defaults.o set_grid_factors.o set_parms.o set_ref_state.o solve_uv_tridiago.o the_model_main.o timestep_wvel.o tracers_iigw_correction.o turnoff_model_io.o write_grid.o write_pickup.o write_state.o gsl_ieee_env.o ptwrapper.o setdir.o setrlstk.o sigreg.o tim.o timer_stats.o -L/opt/cray/pe/netcdf-hdf5parallel/4.7.4.3/crayclang/9.1/lib -L/opt/cray/pe/mpich/8.1.4/ofi/crayclang/9.1/include/lib
/opt/cray/pe/cce/11.0.4/binutils/x86_64/x86_64-pc-linux-gnu/bin/ld: failed to convert GOTPCREL relocation; relink with --no-relax
make[1]: *** [Makefile:2773: mitgcmuv_ad] Error 1
make[1]: Leaving directory '/home2/home/n01/n01/emmomp/MITgcm/ECCO-v4-Configurations/ECCOv4 Release 4/build_ff_ad'
make: *** [Makefile:2750: ad_exe_target] Error 2

add SSH and OBP as diagnostics

@owang01

Currently SSH and OBP are calculated offline to implement the Greatbatch correction (Greatbatch 1994) and the sea-ice load correction. I think we should add SSH and OBP as diagnostics to save users from having to apply these corrections themselves. The reason is that the time-varying Greatbatch corrections are not currently output as a separate file -- they are printed in STDOUT. A user therefore has to know how to parse the correction term out of STDOUT, which, AFAIK, is not documented.

References:
Greatbatch, R. J. (1994): A note on the representation of steric sea level in models that conserve volume rather than mass. https://doi.org/10.1029/94JC00847

Mixing length diagnostic for GGL

Check to see whether there is a mixed-layer-depth diagnostic for GGL corresponding to l_d(k=0).

From MITgcm issue #169:

Based on the above, it seems that l_u and l_d vary as a function of depth. In the interior when stratification is constant, l_u = l_d. At the surface, l_u is presumably zero and l_d should be the distance from the surface to the base of the mixed layer.

I don't know how l_d and l_u are stored in the model but it seems that we would want to record l_d(k=0) for surface mixing depth.

sea ice comments by ML

On 3. Dec 2019, at 23:54, Fenty, Ian G (US 329C) <[email protected]> wrote:
Martin,
I tried increasing SEAICEnonLinIterMax to 10 per your suggestion but seaice_check throws an error:
Need to increase MPSEUDOTIMESTEPS IN SEAICE_PARAMS.h
Is MPSEUDOTIMESTEPS some kind of upper bound on SEAICEnonLinIterMax?
Ian


Hi Ian,

I didn't think of that. For the adjoint (and only for the adjoint), one needs to hard-code an upper limit on the number of nonlinear steps (formerly called NPSEUDOTIMESTEPS) because it needs to be known when the tapes are defined (in the_main_loop.F). That's the parameter MPSEUDOTIMESTEPS. Just set it to 10. This will increase the memory footprint, but not by much as long as SEAICE_LSR_ADJOINT_ITER is undefined (which you want, because you don't even want to use the adjoint of the dynamics).

There are a couple of these upper bounds because of the adjoint (see SEAICE_SIZE.h)

Martin

# SEAICE parameters
# taking /MITgcm_contrib/ecco_utils/ecco_v4_release3_devel
 &SEAICE_PARM01

      SEAICEpresH0=2.,
      SEAICEpresPow0=1,
      SEAICEpresPow1=1,

      SEAICE_strength = 2.25e4,
# I would try to get rid of this in favor of SEAICE_multDim > 1
      SEAICE_area_max = 0.97,      

      SEAICE_no_slip     = .TRUE.,

# this is already the default (with recent code)
      SEAICE_drag=0.001,
      OCEAN_drag=0.001,

#####
      SEAICEuseTILT=.FALSE.,
# I suspect that this makes it necessary to have
# this weird SEAICE_area_max = 0.97
      SEAICE_multDim=1,
      SEAICErestoreUnderIce=.TRUE.,

      SEAICE_salt0=4.,

# I recommend 1.e-5 (only slightly more expensive)
      LSR_ERROR          = 2.e-4,
# I recommend this (makes the dynamics solver 5 times more expensive than the default 2)
# SEAICEnonLinIterMax=10
# this is already the default:
     SEAICEuseDYNAMICS  = .TRUE.,
#default = -50.
      MIN_ATEMP          = -40.,
#default = -50.
      MIN_TICE           = -40.,
      SEAICEadvScheme    = 30,
# this is already the default:
      SEAICEuseFluxForm = .TRUE.,
# this is already the default:
      SEAICEadvSnow      = .TRUE.,
# this is truly disturbing. Why do you do/need that?
      SEAICEdiffKhHeff   = 400.,
      SEAICEdiffKhArea   = 400.,
      SEAICEdiffKhSnow   = 400.,
# this is already the default:
      SEAICEuseFlooding  = .TRUE.,
# I don't know what these 5 parameters do exactly
      SEAICE_mcPheePiston= 3.858024691358025E-05,
      SEAICE_frazilFrac  = 1.,
      SEAICE_mcPheeTaper = 0.,
      SEAICE_areaLossFormula=2,
      SEAICEheatConsFix  = .TRUE.,
# why not use the default with variable freezing point?
      SEAICE_tempFrz0    = -1.96,
      SEAICE_dTempFrz_dS = 0.,
# this is already the default
      SEAICEuseMetricTerms = .TRUE.,
# why do you need that?
      SEAICE_clipVelocities = .TRUE.,
# there are new defaults:
# will lead to zero velocities where there is no ice (no need for clipping
# velocities anymore)
# SEAICEscaleSurfStress = .TRUE.,
# not important just more consistent
# SEAICEaddSnowMass = .TRUE.,
# this makes more sense than 30, but may not work with adjoint. If you
# don't use the adjoint, then I would use this default or 33
# SEAICEadvScheme = 77,
# only makes sense with SEAICE_multDim > 1, but then it makes a lot of sense
# SEAICE_useMultDimSnow = .TRUE.
# the following are for better stability of the solver
# SEAICE_OLx/y = OLx/y - 2
# SEAICEetaZmethod = 3
#end of new defaults

#take 33% out of (1-albedo)
      SEAICE_dryIceAlb   = 0.84,
      SEAICE_wetIceAlb   = 0.78,
      SEAICE_drySnowAlb  = 0.90,
      SEAICE_wetSnowAlb  = 0.8 ,
#default albedos
      SEAICE_dryIceAlb_south   = 0.75
      SEAICE_wetIceAlb_south   = 0.66
      SEAICE_drySnowAlb_south  = 0.84
      SEAICE_wetSnowAlb_south  = 0.7 
 /
#
 &SEAICE_PARM02
 /

CPP flags:

C $Header: /home/ubuntu/mnt/e9_copy/MITgcm_contrib/ecco_utils/ecco_v4_release3_devel/code/SEAICE_OPTIONS.h,v 1.1 2017/05/04 17:46:37 ou.wang Exp $
C $Name:  $

C     *==========================================================*
C     | SEAICE_OPTIONS.h
C     | o CPP options file for sea ice package.
C     *==========================================================*
C     | Use this file for selecting options within the sea ice
C     | package.
C     *==========================================================*

#ifndef SEAICE_OPTIONS_H
#define SEAICE_OPTIONS_H
#include "PACKAGES_CONFIG.h"
#include "CPP_OPTIONS.h"

#ifdef ALLOW_SEAICE
C     Package-specific Options & Macros go here

C--   Write "text-plots" of certain fields in STDOUT for debugging.
#undef SEAICE_DEBUG

C--   Allow sea-ice dynamic code.
C     This option is provided to allow use of TAMC
C     on the thermodynamics component of the code only.
C     Sea-ice dynamics can also be turned off at runtime
C     using variable SEAICEuseDYNAMICS.
#define SEAICE_ALLOW_DYNAMICS

C--   By default, the sea-ice package uses its own integrated bulk
C     formulae to compute fluxes (fu, fv, EmPmR, Qnet, and Qsw) over
C     open-ocean.  When this flag is set, these variables are computed
C     in a separate external package, for example, pkg/exf, and then
C     modified for sea-ice effects by pkg/seaice.
#define SEAICE_EXTERNAL_FLUXES

C--   This CPP flag has been retired.  The number of ice categories
C     used to solve for seaice flux is now specified by run-time
C     parameter SEAICE_multDim.
C     Note: be aware of pickup_seaice.* compatibility issues when
C     restarting a simulation with a different number of categories.
c#define SEAICE_MULTICATEGORY

C--   run with sea Ice Thickness Distribution (ITD);
C     set number of categories (nITD) in SEAICE_SIZE.h
#undef SEAICE_ITD

C--   Since the missing sublimation term is now included
C     this flag is needed for backward compatibility
#undef SEAICE_DISABLE_SUBLIM

C--   Suspected missing term in coupled ocn-ice heat budget (to be confirmed)
#undef SEAICE_DISABLE_HEATCONSFIX

C--   Default is constant seaice salinity (SEAICE_salt0); Define the following
C     flag to consider (space & time) variable salinity: advected and forming
C     seaice with a fraction (=SEAICE_saltFrac) of freezing seawater salinity.
C- Note: SItracer also offers an alternative way to handle variable salinity.
#undef SEAICE_VARIABLE_SALINITY

C--   Tracers of ice and/or ice cover.
#undef ALLOW_SITRACER
#ifdef ALLOW_SITRACER
C--   To try avoid 'spontaneous generation' of tracer maxima by advdiff.
# define ALLOW_SITRACER_ADVCAP
#endif

C--   Enable grease ice parameterization
C     The grease ice parameterization delays formation of solid
C     sea ice from frazil ice by a time constant and provides a
C     dynamic calculation of the initial solid sea ice thickness
C     HO as a function of winds, currents and available grease ice
C     volume. Grease ice does not significantly reduce heat loss
C     from the ocean in winter and area covered by grease is thus
C     handled like open water.
C     (For details see Smedsrud and Martin, 2014, Ann.Glac.)
C     Set SItrName(1) = 'grease' in namelist SEAICE_PARM03 in data.seaice
C     then output SItr01 is SItrNameLong(1) = 'grease ice volume fraction',
C     with SItrUnit(1) = '[0-1]', which needs to be multiplied by SIheff
C     to yield grease ice volume. Additionally, the actual grease ice
C     layer thickness (diagnostic SIgrsLT) can be saved.
#undef SEAICE_GREASE
C--   grease ice uses SItracer:
#ifdef SEAICE_GREASE
# define ALLOW_SITRACER
# define ALLOW_SITRACER_ADVCAP
#endif

C--   By default the seaice model is discretized on a B-Grid (for
C     historical reasons). Define the following flag to use a new
C     (not thoroughly tested) version on a C-grid
#define SEAICE_CGRID

C--   Only for the C-grid version it is possible to
#ifdef SEAICE_CGRID
C     enable JFNK code by defining the following flag
# undef  SEAICE_ALLOW_JFNK
C     enable LSR to use global (multi-tile) tri-diagonal solver
# undef SEAICE_GLOBAL_3DIAG_SOLVER
C     enable EVP code by defining the following flag
CML this can be undefined since you don't use it
# define SEAICE_ALLOW_EVP
# ifdef SEAICE_ALLOW_EVP
C--   When set use SEAICE_zetaMin and SEAICE_evpDampC to limit viscosities
C     from below and above in seaice_evp: not necessary, and not recommended
#  undef SEAICE_ALLOW_CLIPZETA
# endif /* SEAICE_ALLOW_EVP */
C     regularize zeta to zmax with a smooth tanh-function instead
C     of a min(zeta,zmax). This improves convergence of iterative
C     solvers (Lemieux and Tremblay 2009, JGR). No effect on EVP
# undef SEAICE_ZETA_SMOOTHREG
C     allow the truncated ellipse rheology (runtime flag SEAICEuseTEM)
# undef SEAICE_ALLOW_TEM
C     Use LSR vector code; not useful on non-vector machines, because it
C     slows down convergence considerably, but the extra iterations are
C     more than made up by the much faster code on vector machines. For
C     the only regularly tested vector machine these flags are specified
C     in the build options file SUPER-UX_SX-8_sxf90_awi, so that we comment
C     them out here.
C# define SEAICE_VECTORIZE_LSR
C# ifdef SEAICE_VECTORIZE_LSR
C     Use modified LSR vector code that splits vector loop into two with
C     step size 2. This modification improves the convergence of the vector
C     code dramatically, so that it may actually be useful in general, but
C     that needs to be tested.
C#  define SEAICE_VECTORIZE_LSR_ZEBRA
CML replacing this with new flag:
# undef SEAICE_VECTORIZE_LSR
C     Use zebra-method (alternate lines) for line-successive-relaxation
C     This modification improves the convergence of the vector code
C     dramatically, so that it may actually be useful in general, but
C     that needs to be tested. Can be used without vectorization options.
# undef SEAICE_LSR_ZEBRA
C     Use parameterisation of grounding ice for a better representation
C     of fastice in shallow seas
C# endif
#else /* not SEAICE_CGRID, but old B-grid */
C--   By default for B-grid dynamics solver wind stress under sea-ice is
C     set to the same value as it would be if there was no sea-ice.
C     Define following CPP flag for B-grid ice-ocean stress coupling.
# define SEAICE_BICE_STRESS

C--   By default for B-grid dynamics solver surface tilt is obtained
C     indirectly via geostrophic velocities. Define following CPP
C     in order to use ETAN instead.
# define EXPLICIT_SSH_SLOPE
C--   Defining this flag turns on FV-discretization of the B-grid LSOR solver.
C     It is smoother and includes all metric terms, similar to C-grid solvers.
C     It is here for completeness, but its usefulness is unclear.
# undef SEAICE_LSRBNEW
#endif /* SEAICE_CGRID */

C--   When set limit the Ice-Loading to mass of 1/5 of Surface ocean grid-box
#undef SEAICE_CAP_ICELOAD
C--   When set use SEAICE_clipVelocities = .true., to clip U/VICE at 40cm/s,
C     not recommended
#define SEAICE_ALLOW_CLIPVELS
C--   When set cap the sublimation latent heat flux in solve4temp according
C     to the available amount of ice+snow. Otherwise this term is treated
C     like all of the others -- residuals heat and fw stocks are passed to
C     the ocean at the end of seaice_growth in a conservative manner.
C     SEAICE_CAP_SUBLIM is not needed as of now, but kept just in case.
#undef SEAICE_CAP_SUBLIM

C--   Enable free drift code
#define SEAICE_ALLOW_FREEDRIFT

C--   pkg/seaice cost functions compile flags
c       >>> Sea-ice volume (requires pkg/cost)
#undef ALLOW_COST_ICE
c       >>> Sea-ice misfit to obs (requires pkg/cost and ecco)
#undef ALLOW_SEAICE_COST_SMR_AREA

C--   enforce CFL condition without cutting sensitivity flow
c#define ALLOW_CFL_FIX

C--   cut the adjoint dependency to hactual, etc.
c# undef SEAICE_SIMPLIFY_GROWTH_ADJ

C--   go through heff and open ocean
c#define SEAICE_MODIFY_GROWTH_ADJ

#endif /* ALLOW_SEAICE */
#endif /* SEAICE_OPTIONS_H */

CEH3 ;;; Local Variables: ***
CEH3 ;;; mode:fortran ***
CEH3 ;;; End: ***

ECCOv4r4_grid.nc

Include maskC/W/S as fields.

Issue raised in ecco-support in Dec 2019.
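
A rough sketch of how the masks could be derived and written back, assuming the grid file already carries the wet-fraction fields hFacC/hFacW/hFacS (the output file name is arbitrary):

import xarray as xr

grid = xr.open_dataset("ECCOv4r4_grid.nc")

# Wet-point masks: 1 where the open fraction of the cell (face) is non-zero, else 0.
for hfac, mask in [("hFacC", "maskC"), ("hFacW", "maskW"), ("hFacS", "maskS")]:
    grid[mask] = (grid[hfac] > 0).astype("i1")
    grid[mask].attrs["long_name"] = "wet-point mask derived from " + hfac

grid.to_netcdf("ECCOv4r4_grid_with_masks.nc")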

gencost_outputlevel(0)

@owang01

The igen_XXX terms for the sea level cost are initialized to zero and then set to k if a matching name appears in the gencost_name array:

cost_gencost_sshv4.F: igen_gmsl=0
cost_gencost_sshv4.F: if (gencost_name(k).EQ.'sshv4-gmsl') igen_gmsl=k

However, later these igen_XXX terms are used as array indices:
cost_gencost_sshv4.F: if (gencost_outputlevel(igen_gmsl).GT.0) then

without checking to make sure that igen_XXX > 0.

This results in an out-of-bounds array access when igen_XXX is still zero.

Provide easy access to Climatology forcing

Put the runoff and geothermal heat forcing in a separate auxiliary tar file within the ancillary data.

In v4r4 they were bundled with all of the atmospheric forcing, and that file is 192 GB, which is too big for a quick budget analysis.

Suggested by Andrew D.

Inconsistency in r4 configuration data.exch2

The data.exch2 file in the Release 4 branch is inconsistent with the same file in input_init/NAMELIST on the NASA drive (I accessed it via https://podaac.jpl.nasa.gov/dataset/ECCO_L4_ANCILLARY_DATA_V4R4) and appears to be missing three entries in the blank list for the 15x15 (360-core) configuration. I believe the NASA version is correct, as it has 108 entries whereas the version in this repo has 105, missing 188, 242 and 376.

From this repository:

#15x15 nprocs = 360
#blankList(1:108)=1,2,3,4,5,6,7,8,9,10,11,12,14,15,16,17,18,21,22,23,24,
#65,71,75,76,90,95,96,101,102,109,110,111,112,113,114,115,116,117,118,119,
#120,121,122,123,124,125,126,127,128,129,130,131,132,
#189,190,193,194,195,196,199,
#200,201,202,203,205,206,207,208,209,211,212,213,214,215,216,247,253,
#267,268,269,270,287,288,305,306,323,324,341,342,359,360,362,377,378,
#380,381,382,395,396,400,412,413,414,430,

From the NASA drive:

#15x15 nprocs = 360
#blankList(1:108)=1,2,3,4,5,6,7,8,9,10,11,12,14,15,16,17,18,21,22,23,24,
#65,71,75,76,90,95,96,101,102,109,110,111,112,113,114,115,116,117,118,119,
#120,121,122,123,124,125,126,127,128,129,130,131,132,
#188,189,190,193,194,195,196,199,
#200,201,202,203,205,206,207,208,209,211,212,213,214,215,216,242,247,253,
#267,268,269,270,287,288,305,306,323,324,341,342,359,360,362,376,377,378,
#380,381,382,395,396,400,412,413,414,430,
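
For reference, a small script along these lines can diff the two blank lists; the two input files are hypothetical copies of the commented blocks pasted above:

def parse_blanklist(text):
    # Collect the integer tile numbers from the commented blankList lines.
    numbers = set()
    for line in text.splitlines():
        line = line.lstrip("#").strip()
        if "blankList" in line:
            line = line.split("=", 1)[1]
        numbers.update(int(tok) for tok in line.split(",") if tok.strip().isdigit())
    return numbers

repo = parse_blanklist(open("blanklist_repo.txt").read())
nasa = parse_blanklist(open("blanklist_podaac.txt").read())
print(sorted(nasa - repo))   # expected: [188, 242, 376]
print(len(repo), len(nasa))  # expected: 105, 108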

End-of-file namelist errors on two Cray HPC platforms

Hello,

I am attempting to reproduce ECCOv4-r4 on two Cray HPC platforms, namely on both ARCHER and the new ARCHER2 for benchmarking purposes. I am using this document as guidance:

https://ecco-group.org/docs/v4r4_reproduction_howto.pdf

The code compiles without issue. For context, I used the Cray compiler on ARCHER and the gnu compiler on ARCHER2. (We are still working out how to use the Cray compiler for MITgcm on ARCHER2. It's a brand new machine.)

I received end-of-file errors when running the code as-is, and I traced these errors to the use of "&" characters as end-of-namelist characters in some of the data* files. That is, in some of the namelists, the "/" character is the end-of-namelist marker, whereas in others, it is the "&" character.

After I changed all of the end-of-namelist characters to "/", the code ran without error.
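
For reference, a rough sketch of the substitution I applied (the run directory is hypothetical; it assumes the stray terminator sits alone on its line, and you should back up the originals first):

import glob

for fname in glob.glob("run/data*"):
    with open(fname) as f:
        lines = f.readlines()
    # Replace a lone "&" end-of-namelist marker with the standard "/" terminator.
    fixed = ["/\n" if line.strip() == "&" else line for line in lines]
    if fixed != lines:
        with open(fname, "w") as f:
            f.writelines(fixed)
        print("rewrote namelist terminators in", fname)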

I'm guessing that you probably don't want to change your configurations, but it might be a good idea to note this somewhere in the documentation. I'm happy to contribute in some way if you like - please let me know how you would like to proceed.

with ice shelf cavities Depth.data is no longer seafloor depth but distance from seafloor to sea surface

I expected the model grid output field Depth.data to be the seafloor depth, but in the v4r5 output the values of Depth.data are now the distance between the seafloor and the sea surface. In open-water areas this is as expected, but where we have ice-shelf cavities, Depth.data is the distance between the seafloor and the ceiling of the ice-shelf cavity. We need to make it the seafloor depth again so that it corresponds with its name. @owang01
