escomp / ctsm

Community Terrestrial Systems Model (includes the Community Land Model of CESM)

Home Page: http://www.cesm.ucar.edu/models/cesm2.0/land/

License: Other

Perl 3.21% XSLT 0.14% NCL 0.63% Python 9.48% Shell 0.73% CMake 0.43% Fortran 85.18% Makefile 0.16% Common Lisp 0.02% Dockerfile 0.01% nesC 0.02% Batchfile 0.01%
land climate hydrology ecosystem land-surface-model ncar cesm clm

ctsm's Introduction

CTSM


Overview and resources

The Community Terrestrial Systems Model.

This includes the Community Land Model (CLM5.0 and CLM4.5) of the Community Earth System Model.

For documentation, quick start, diagnostics, model output and references, see

http://www.cesm.ucar.edu/models/cesm2.0/land/

and

https://escomp.github.io/ctsm-docs/

For help with how to work with CTSM in git, see

https://github.com/ESCOMP/CTSM/wiki/Quick-start-to-CTSM-development-with-git

and

https://github.com/ESCOMP/ctsm/wiki/Recommended-git-setup

For support with model use, troubleshooting, etc., please use the CTSM forum on the CESM forums, or another forum there as appropriate (e.g., for infrastructure/porting questions).

To get updates on CTSM tags and important notes on CTSM development, join our low-traffic email list:

https://groups.google.com/a/ucar.edu/forum/#!forum/ctsm-dev

(Send email to [email protected] if you have problems with any of this)

CTSM code management team

CTSM code management is provided primarily by:

Software engineering team:

Science team:

FATES Project:

Perturbed Parameter Experiment (PPE) Science team:

ctsm's People

Contributors

adrifoster, bandre-ucar, billsacks, cenlinhe, ciceconsortium, ckoven, djk2120, dlawrenncar, ekluzek, fang-bowen, glemieux, ivanderkelen, jedwards4b, jtruesdal, keerzhang1, lmbirch89, mathewvrothstein, mvdebolskiy, nanr, negin513, olyson, rgknox, rosiealice, samsrabin, slevis-lmwg, sunnivin, swensosc, teaganking, wwieder, yanyanchenghydro


ctsm's Issues

mkmapdata should use shared mapping tools

Bill Sacks < [email protected] > - 2013-05-07 15:48:42 -0600
Bugzilla Id: 1683
Bugzilla Depends: 1938,
Bugzilla CC: [email protected], [email protected], [email protected], [email protected], [email protected],

Mike Levy suggested that CLM's mkmapdata tool should use the shared mapping tools, and in particular tools/mapping/gen_mapping_files/gen_ESMF_mapping_file/create_ESMF_map.sh -- rather than directly calling the ESMF tool. The main reason for this is that the shared tool can provide an insulating layer, requiring less maintenance of the CLM tool if the version or interface of the ESMF tool changes in the future.

We wanted to get this into the cesm1.2 release, but time limitations prevented doing so.

Here are some notes on what needs to be done to accomplish this:

First, some notes from Mike Levy:

I've created

https://svn-ccsm-models.cgd.ucar.edu/tools/mapping/trunk_tags/mapping_130410

with all of the changes we talked about (and more). To generate the file

map_${grid}_${lmask}_to_${res}_nomask_aave_da_$CDATE.nc

You'll want to run

$CESMROOT/tools/mapping/gen_mapping_files/gen_ESMF_mapping_file/create_ESMF_map.sh -fsrc [file corresponding to ${grid}] -nsrc ${grid}_${lmask} -fdst [file corresponding to ${res}] -ndst ${res}_nomask -map aave --clm_name

You can add the following (see the README file in gen_ESMF_mapping_file for more options):

  • If the source file is a regional map, use -tsrc regional
  • If the destination file is a regional map, use -tdst regional
  • If you want to run the serial ESMF implementation, use -serial
  • If you need the 64bit offset, use -big
  • If you want to pass any other flags to ESMF, use --pass2esmf $FLAGS

This should work on yellowstone, caldera, and geyser. It may even work on jaguar, but as I said I've never tried.
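The invocation above could be assembled by a small helper, so the CLM-side logic stays testable without ESMF installed. A sketch, assuming the flag names quoted above; the function itself is hypothetical, not part of the shared mapping tools:

```python
def build_create_esmf_map_cmd(cesmroot, fsrc, nsrc, fdst, ndst,
                              map_type="aave", regional_src=False,
                              regional_dst=False, serial=False, big=False,
                              extra_esmf_flags=None):
    """Assemble the argument list for create_ESMF_map.sh.

    Hypothetical helper: the flag names mirror the ones quoted above;
    this wrapper is not part of the shared mapping tools.
    """
    cmd = [
        f"{cesmroot}/tools/mapping/gen_mapping_files/"
        "gen_ESMF_mapping_file/create_ESMF_map.sh",
        "-fsrc", fsrc, "-nsrc", nsrc,
        "-fdst", fdst, "-ndst", ndst,
        "-map", map_type,
        "--clm_name",              # force the CLM naming convention
    ]
    if regional_src:
        cmd += ["-tsrc", "regional"]
    if regional_dst:
        cmd += ["-tdst", "regional"]
    if serial:
        cmd += ["-serial"]         # serial ESMF implementation
    if big:
        cmd += ["-big"]            # 64-bit offset output
    if extra_esmf_flags:
        cmd += ["--pass2esmf", extra_esmf_flags]
    return cmd
```

mkmapdata could then hand the list to subprocess.run; building the argument list separately from running it keeps the insulating layer thin and easy to update if the shared tool's interface changes.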

Now, my notes (some of this repeats what Mike said):

  • need to pass -clm_name (this forces the tool to use the CLM naming convention, which differs somewhat from the naming convention used elsewhere for mapping files)

  • need to pass src & dest names, NOT weight file name. Note that these names should include the mask so that the output name looks right

  • For the case where we point to the beta snapshot, we will still need to set ESMFBIN_PATH, to override what's set in Mike's script. This refers to this code:

    if [ "$lrgfil" = "--netcdf4" ] || [ ${SRC_TYPE[nfile]} = "UGRID" ] || [ $DST_TYPE = "UGRID" ]; then
      case $hostname in
        ys* | caldera* | geyser* )
          if [ $mpitype = "mpiuni" ]; then
            MY_ESMF_REGRID=/glade/p/work/svasquez/ESMF620bs18-mpiuni/bin/ESMF_RegridWeightGen
          else
            MY_ESMF_REGRID=/glade/p/work/svasquez/ESMF620bs18/bin/ESMF_RegridWeightGen
          fi
          ;;
        *)
          echo "No support for --netcdf4 or UGRID on machines other than yellowstone/caldera/geyser"
          exit 5
          ;;
      esac
    fi

--- TESTING ---

  • test generation of a global and single-point mapping file in a single submission

    • make sure mapping files are created, with the same name as before
  • test generation of a global and single-point mapping file for Sean Swenson's file -- i.e., the topostats 1km file, which triggers the logic given above (pointing to the beta snapshot in the svasquez directory)

    • make sure mapping files are created, with the same name as before

--- DEALING WITH THE LOGIC POINTING TO THE BETA SNAPSHOT CODE ---

Once the 6.2.0 release is made for the ESMF tool, we can remove the explicit setting of ESMFBIN_PATH in the CLM mkmapdata.sh -- i.e., we can remove the above logic, as long as the shared mapping tools point to the new 6.2.0 release. At that point, I think all of the 'hostname'-related logic in mkmapdata.sh can go away.

Since we're no longer trying to get this done for the cesm1.2 release, it's probably worth waiting until the shared mapping tool can be updated to 6.2.0 before making any of the changes suggested in this bug report -- since at that point, mkmapdata.sh can become simpler, no longer requiring any knowledge of the host or any paths to the esmf tool.

Paths wrong in CLM's build-namelist after create_clone

Bill Sacks < [email protected] > - 2011-10-20 10:49:00 -0600
Bugzilla Id: 1426
Bugzilla CC: [email protected], [email protected], [email protected], [email protected], [email protected],

After running create_clone followed by configure -case, the paths used by CLM's build-namelist tool point to the cloned case; they should point to the new case directory.

For example, I cloned a case named test_diags_1014_clm4_0_37, to make a new case test_diags_1017. Here are the paths as documented in clm.buildnml.csh:

#! lnd_in:: Comment:
#! This namelist was created using the following command-line:
#! /glade/proj3/cseg/people/sacks/cesm_code/clm_landice_diagnostics2/models/lnd/clm/bld/build-namelist -config /glade/proj3/cseg/people/sacks/cesm_tests/test_diags_1014_clm4_0_37/Buildconf/clmconf/config_cache.xml -res 1.9x2.5
-mask gx1v6 -ignore_ic_year -use_case 2000_glacierMEC_control -infile cesm_namelist -clm_start_type default -rtm_res R05 -rtm_tstep 10800 -co2_ppmv 367.0 -datm_presaero clim_2000 -glc_grid gland20 -namelist &clm_inparm / -csmdata
$DIN_LOC_ROOT -inputdata /glade/proj3/cseg/people/sacks/cesm_tests/test_diags_1014_clm4_0_37/Buildconf/clm.input_data_list
#! For help on options use: /glade/proj3/cseg/people/sacks/cesm_code/clm_landice_diagnostics2/models/lnd/clm/bld/build-namelist -help

I don't understand the implications of this, but Erik says it's bad.

remove duplication in setting of history fields for carbon isotopes (c13 and c14)

Bill Sacks < [email protected] > - 2016-02-16 14:09:01 -0700
Bugzilla Id: 2284
Bugzilla CC: [email protected],

There is a lot of near-duplicate code in the InitHistory subroutines of CNVegCarbonStateType.F90 and SoilBiogeochemCarbonStateType.F90 (and maybe elsewhere): Basically the same blocks of code are done for c12, c13 and c14. At a recent CLM-CMT meeting, we decided that this duplication should be removed. Steps to doing so are:

(1) Examine differences between the blocks for c12, c13, c14: confirm that the below steps capture all relevant differences

(2) For variables that currently have hist fields for c12, but not c13 / c14: determine if the exclusion for c13 & c14 is intentional or accidental. (This step is the main thing stopping me from just going ahead and doing this now.)

(3) Introduce the following variables that differ for each instance:

  • shortname_prefix: blank for c12, 'C13_' or 'C14_'

  • longname_prefix: blank for c12, 'C13 ' or 'C14 '

  • units: 'gC', 'gC13', 'gC14'

(4) Use the above variables where they are needed; e.g.:

call hist_addfld1d (fname=shortname_prefix//'LEAFC', ...)

(5) Introduce a variable default_for_non_isotope_fields (probably can come up with a better name). This is 'active' for c12, 'inactive' for c13 and c14. Then, for fields that are currently active by default for c12, but inactive by default for c13 or c14 (or weren't added at all for c13/c14), set: default=default_for_non_isotope_fields

(6) Check header of an h0 history file in a BGC case: should be the same as before
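The plan in steps (3)-(5) amounts to driving one registration block from a small per-isotope table instead of keeping three near-identical copies. A Python sketch of the idea (hist_addfld1d and the single field below are stand-ins for the Fortran interfaces):

```python
# Per-isotope settings from steps (3) and (5): name prefixes, units, and
# the default activity used for non-c12 isotopes.
ISOTOPES = {
    "c12": dict(short_prefix="",     long_prefix="",     units="gC",
                default_for_non_isotope_fields="active"),
    "c13": dict(short_prefix="C13_", long_prefix="C13 ", units="gC13",
                default_for_non_isotope_fields="inactive"),
    "c14": dict(short_prefix="C14_", long_prefix="C14 ", units="gC14",
                default_for_non_isotope_fields="inactive"),
}

def register_history_fields(isotope, hist_addfld1d):
    """Register the shared carbon-state fields once, for any isotope.

    `hist_addfld1d` stands in for the model's registration routine; the
    single LEAFC field here is illustrative.
    """
    cfg = ISOTOPES[isotope]
    hist_addfld1d(fname=cfg["short_prefix"] + "LEAFC",
                  units=cfg["units"],
                  long_name=cfg["long_prefix"] + "leaf C",
                  default=cfg["default_for_non_isotope_fields"])
```

One registration body then serves c12, c13, and c14, so step (6)'s bit-for-bit header check only has to confirm the table was filled in correctly.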

problem with the bounds of some history fields

Bill Sacks < [email protected] > - 2013-08-17 07:46:54 -0600
Bugzilla Id: 1786
Bugzilla CC: [email protected], [email protected], [email protected], [email protected],

There is a potential problem with the bounds of some history fields. My guess is that this doesn't cause any problems now, but could cause problems in the future, if either (1) hist_update_hbuf was called within a threaded region (right now it's not), or (2) assumptions were made about the lower bound of arrays in hist_update_hbuf.

The problem arises from associating a pointer with an array slice, as in:

ptr => target(:, 1:n)

When you do this, the lower bound of ptr is reset to 1. Contrast this to:

ptr => target

in which case ptr retains the lower bounds of target.

Specifically, this occurs in:

(1) hist_update_hbuf_field_2d

field          => clmptr_ra(hpindex)%ptr(:,1:num2d)

(2) histFldsMod; e.g.:

      data1dptr => ccs%decomp_cpools(:,l)

(and maybe elsewhere - I haven't done an extensive search)

I believe this can be solved with the following syntax:

! In the following, we need to explicitly set the lower bound of 'field', otherwise
! it gets set to 1 when it's associated with the slice of 'ptr'
arr_lbound = lbound(clmptr_ra(hpindex)%ptr, 1)
field(arr_lbound: , 1:) => clmptr_ra(hpindex)%ptr(:,1:num2d)

but I haven't tested this.

For now, in the interest of time, I am working around this problem simply by NOT explicitly specifying the bounds of the history fields in calls to p2g/c2g/l2g in hist_update_hbuf; e.g., I am calling these routines like:

      call p2g(bounds, &
           field, &
           field_gcell(bounds%begg:bounds%endg), &
           p2c_scale_type, c2l_scale_type, l2g_scale_type)

rather than like:

      call p2g(bounds, &
           field(bounds%begp:bounds%endp), &
           field_gcell(bounds%begg:bounds%endg), &
           p2c_scale_type, c2l_scale_type, l2g_scale_type)

I think this should be okay for now, but wouldn't work if this was called from within a threaded region.

incorrect version identifiers in text printed by source code

Bill Sacks < [email protected] > - 2014-01-13 14:18:55 -0700
Bugzilla Id: 1898
Bugzilla CC: [email protected], [email protected],

The CLM version numbers are incorrect in various text that is printed by the model. Some examples are:

clm4_0/biogeochem/CNDVMod.F90:265: str = 'Community Land Model: CLM3'
clm4_5/biogeochem/CNDVMod.F90:181: str = 'Community Land Model: CLM3'
clm4_5/main/clm_varctl.F90:36: character(len=256), public :: source = "Community Land Model CLM4.0" ! description of this source

Add a test that ensures that setting all_active = .true. doesn't change answers for gridcell averages

Bill Sacks < [email protected] > - 2016-06-13 20:37:51 -0600
Bugzilla Id: 2322
Bugzilla CC: [email protected], [email protected],

Changing which 0-weight points are active should not change answers for gridcell averages. If this does change answers, it indicates a bug: values in 0-weight patches or columns should not affect the gridcell averages.

I have found two bugs by doing a manual test like this: bug 1851 and bug 2321. (See notes in bug 1851 for more detailed thoughts.)

We should add an automated test of this. Specifically, it could do two runs: one out-of-the-box and one with all_active = .true. (probably with finidat = ' ', because it can be problematic to set all_active = .true. with an finidat file not set up for that situation - although maybe we could use an initial conditions file along with init_interp?) The gridcell-level history fields should be bit-for-bit identical in these two runs. We could do that with a mechanism similar to the current LII test. However, this will hopefully become more straightforward once CESM-Development/cime#146 is resolved - hopefully making it relatively trivial to create one-off tests like this.
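The comparison step itself is simple once the two runs exist: every gridcell-level field must match bit for bit. A minimal sketch, where the field-name-to-values mapping is a placeholder for real history output:

```python
def differing_gridcell_fields(run_a, run_b):
    """Names of gridcell-level fields that are not bit-for-bit identical.

    run_a / run_b map field name -> list of values; in the real test these
    would come from the history files of the two runs.
    """
    diffs = []
    for name in sorted(set(run_a) | set(run_b)):
        if run_a.get(name) != run_b.get(name):
            diffs.append(name)
    return diffs
```

A passing test is an empty list; any name in the list points directly at a field whose gridcell average was contaminated by 0-weight points.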

dynamic landunits: handle methane with changing lake area

Bill Sacks < [email protected] > - 2016-02-16 13:38:10 -0700
Bugzilla Id: 2283
Bugzilla CC: [email protected], [email protected], [email protected],

The methane code (ch4Mod.F90) implicitly assumes that finundated is 1 for lakes. Thus, if lake area increases, the right thing to do is probably to immediately inundate any land it took over, with code similar to what happens right now when inundated area increases.

Decreases in lake area should be handled appropriately now, but increases in lake area are not. Currently, there is no way for lakes to increase in area, but this will need to be revisited once they can.

reorganize how mocks are done for unit testing

Bill Sacks < [email protected] > - 2014-02-14 15:16:00 -0700
Bugzilla Id: 1923
Bugzilla CC: [email protected], [email protected], [email protected], [email protected],

I'm starting to think that I should have created mocks a level lower than where they currently are. For example, I currently mock out modules that use mpi, spmdMod, mct, etc. But this is leading to too many mocks. It might be better to mock out those problematic things - i.e., create mock versions of mpi, spmdMod, mct, etc. Then we could build the regular versions of modules that use them (they would just end up using the mocked-out version).
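The proposal, sketched in Python terms (the module name and attributes are illustrative stand-ins, not the real build setup): provide one fake low-level module, and let the real higher-level code import it unchanged.

```python
import sys
import types

# One mock of the low-level dependency ("spmdMod" as a stand-in name),
# instead of a mock per module that uses it.
mock_spmd = types.ModuleType("spmdMod")
mock_spmd.masterproc = True   # pretend this is the master task
mock_spmd.npes = 1            # single "MPI" task
sys.modules["spmdMod"] = mock_spmd

# Higher-level "real" code now needs no mock of its own:
import spmdMod

def report_role():
    """Unmodified high-level logic; it sees whichever spmdMod is loaded."""
    return "master" if spmdMod.masterproc else "non-master"
```

The payoff is the one described above: the number of mocks scales with the number of problematic low-level libraries, not with the number of modules that happen to use them.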

need better way to find all possible history fields

Bill Sacks < [email protected] > - 2015-06-26 11:29:48 -0600
Bugzilla Id: 2187
Bugzilla CC: [email protected], [email protected], [email protected], [email protected],

Danica raised the point that it is hard to find all possible history fields with the new CLM code organization. I know we have findHistFields.pl, but it doesn't seem to work right from the latest code base: it finds a lot of BGC-related fields, but didn't find the fields we were looking for (RH and QBOT in atm2lndType.F90).

So maybe this tool just needs to be fixed? Or maybe I was using it wrong.

We also felt that this should either (a) print more information about the fields (e.g. whether it's active or inactive by default) or (b) print the file in which a given field is defined, so that you can go there to find more information about the field.

In looking through this code, I also noticed that the list of files to search is hard-coded. It seems like this list will quickly get out-of-date, if it isn't already. Can it simply search all F90 files below src?

Alternatively, if this tool is too hard to maintain, perhaps another option would be to give a namelist option to CLM to make it write out all history fields to its log file in initialization.
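Searching every Fortran file below src, rather than a hard-coded list, is straightforward. A sketch of what a rewritten findHistFields could do; the regex only catches simple single-line hist_addfld calls with fname on the same line, so a real version would also need to handle Fortran '&' continuations:

```python
import re
from pathlib import Path

# Matches calls like: call hist_addfld1d (fname='QBOT', units='kg/kg', ...)
FIELD_RE = re.compile(r"hist_addfld\w*\s*\(.*?fname\s*=\s*'([^']+)'",
                      re.IGNORECASE)

def find_hist_fields(src_root):
    """Map field name -> defining file, searching every *.F90 below src_root."""
    fields = {}
    for path in Path(src_root).rglob("*.F90"):
        for line in path.read_text().splitlines():
            m = FIELD_RE.search(line)
            if m:
                fields[m.group(1)] = str(path)
    return fields
```

Recording the defining file alongside each name also answers point (b) above: the output can tell users where to look for more information about a field.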

water balance error in clm5 kitchen sink test when adding zbedrock to surface dataset

Bill Sacks < [email protected] > - 2016-01-28 09:54:11 -0700
Bugzilla Id: 2277
Bugzilla CC: [email protected], [email protected], [email protected], [email protected],

From https://svn-ccsm-models.cgd.ucar.edu/clm2/branch_tags/rework_fglcmask_tags/rework_fglcmask_n08_clm4_5_7_r164

this test dies with a water balance error:

ERS_Ly3.f10_f10.I1850CRUCLM50BGCCROP.yellowstone_intel.clm-clm50KSinkMOut

The full error message is below.

The relevant change on this branch is that it uses a new surface dataset, which includes zbedrock - so it triggers the new use_bedrock code. I have confirmed that the test passes if I point to a version of the new surface dataset that is identical except that zbedrock is removed.

27: WARNING: water balance error nstep= 8393 local indexc= 3872
27: errh2o= 2.401732114284960E-005
27: clm model is stopping - error is greater than 1e-5 (mm)
27: nstep = 8393
27: errh2o = 2.401732114284960E-005
27: forc_rain = 0.000000000000000E+000
27: forc_snow = 0.000000000000000E+000
27: endwb = 6951.93139686015
27: begwb = 6951.98520312504
27: qflx_evap_tot = 5.975657251271233E-003
27: qflx_irrig = 0.000000000000000E+000
27: qflx_surf = 0.000000000000000E+000
27: qflx_h2osfc_surf = 0.000000000000000E+000
27: qflx_qrgwl = 0.000000000000000E+000
27: qflx_drain = 4.785462495282254E-002
27: qflx_drain_perched = 0.000000000000000E+000
27: qflx_flood = 0.000000000000000E+000
27: qflx_ice_runoff_snwcp = 0.000000000000000E+000
27: qflx_ice_runoff_xs = 0.000000000000000E+000
27: qflx_glcice_melt = 0.000000000000000E+000
27: qflx_glcice_frz = 0.000000000000000E+000
27: clm model is stopping
27: calling getglobalwrite with decomp_index= 3872 and clmlevel= column
27: local column index = 3872
27: global column index = 3579
27: global landunit index = 1038
27: global gridcell index = 238
27: gridcell longitude = 15.0000000000000
27: gridcell latitude = 70.0000000000000
27: column type = 215
27: landunit type = 2
27: ENDRUN:
27: ERROR in BalanceCheckMod.F90 at line 420

360x720 mapping directory name disagrees with resolution name

Bill Sacks < [email protected] > - 2013-04-13 05:16:34 -0600
Bugzilla Id: 1662
Bugzilla CC: [email protected], [email protected], [email protected], [email protected], [email protected],

Recently, the 360x720 resolution was renamed to 360x720cru. However, the directory in inputdata/lnd/clm2/mappingdata/maps is still named 360x720. This (a) could cause confusion, and more importantly (b) messes up the tools we have to post-process files created by mkmapdata (automatically creating xml entries and moving files into the correct location).

I'm not sure what the correct solution is:

(1) Live with it? It seems like we'll always be tripping up on this, having to correct the automated mkmapdata post-processing scripts.

(2) Fix the mkmapdata post-processing tools so they give special handling to this grid?

(3) Rename the mapping directory in inputdata? But that will break any older tags.

(4) Rename the mapping directory in inputdata, and make a sym link in yellowstone's inputdata (360x720 -> 360x720cru) so that old tags will at least continue to work there. This is probably my preference.

Note that a renaming of the inputdata directory will require (a) changing paths in the namelist defaults files, and (b) require running some CESM tests and tools tests at 360x720cru resolution, to make sure all necessary paths have been changed correctly. These tests should be done BEFORE making the sym link suggested in (4).

Because this might be a pain, I think it can wait until after the release.
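Option (4) sketched with Python's standard library (the directory names follow this issue; the actual inputdata tree on glade would be handled once by hand or by a small script like this):

```python
import os

def rename_with_compat_link(parent, old_name, new_name):
    """Rename a data directory and leave a symlink at the old name, so
    older tags that still reference old_name keep working.

    Sketch of option (4) above.
    """
    old = os.path.join(parent, old_name)
    new = os.path.join(parent, new_name)
    os.rename(old, new)
    os.symlink(new_name, old)   # relative link: 360x720 -> 360x720cru
```

Because the link is relative, it stays valid even if the parent inputdata directory is itself moved or mirrored.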

remove cprnc.pl and cprnc.ncl; use standard cprnc instead

Bill Sacks < [email protected] > - 2013-05-29 12:59:52 -0600
Bugzilla Id: 1716
Bugzilla CC: [email protected], [email protected],

My understanding is that cprnc.ncl and its cprnc.pl wrapper exist only because the standard fortran-based cprnc wasn't able to compare files without a time dimension. As of cprnc_130529, this is fixed. So I propose removing these CLM-specific cprnc scripts, and switching testing to use the standard cprnc instead.

Advantages:

(1) no longer need to maintain these scripts

(2) I recently found a problem with the cprnc.ncl script - it doesn't flag files as different if they differ in where there are missing values (see bug 1714). The fortran-based cprnc handles this correctly

(3) The fortran-based cprnc gives you more information about differences than you can get from cprnc.ncl.

(4) Post-processing scripts that look at cprnc output (e.g., summarize_cprnc_diffs) don't work on cprnc.ncl output; so if we switch to using the fortran-based program, these post-processing scripts can be used.

For history fields, change 'active' terminology

Bill Sacks < [email protected] > - 2013-02-22 20:13:32 -0700
Bugzilla Id: 1631
Bugzilla CC: [email protected],

I am adding 'active' flags saying: 'should computations be done here?' This leads to overloading of the term 'active' in histFileMod, which can lead to confusion. Erik suggests that we should eventually rename 'active' as it applies to history fields, perhaps to something like default_on.

problems with new CNFireMod

Bill Sacks < [email protected] > - 2013-05-01 15:24:28 -0600
Bugzilla Id: 1679
Bugzilla CC: [email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected],

This started with a search through the CNFireMod code for potentially-incorrect uses of weights, now that I have introduced 'active' flags. But in looking for this, I noticed a number of questionable things in this code, so decided to review the entire module. Here are some problems I found.

Some of these are cosmetic (e.g., variable names); others are not bugs per se, but make the code fragile to future changes; and others look like bugs (and if they aren't bugs, then they need some explanation about why they're correct even though they look incorrect).

(1) Many variables are declared as "! local pointers to implicit in scalars", when in fact they are implicit out or in/out

(2) There are a number of cryptic variable names, some with no description. For example:

real(r8) :: lh !
real(r8) :: fs !
real(r8) :: ig !

(3) There are a number of places where the code checks whether the current pft is crop or natural veg. This is done like this:

       ! For crop veg types
       if( ivt(p) > nc4_grass )then

       ! For natural vegetation (non-crop)
       if( ivt(p) >= ndllf_evr_tmp_tree .and. ivt(p) <= nc4_grass )then

This is very fragile code, which will break if anyone adds new natural PFTs to the beginning or end of the list. A possible replacement for crop is - I think - to use npcropmin & npcropmax (defined in pftvarcon.F90) - though I'm not sure whether that does the right thing when running without the specific crop types?

(4) Similarly, checks of the type of pft:

                   if( ivt(p) .ge. nbrdlf_evr_shrub )then      !for shurb and grass

...
else ! for trees

This assumes that people adding PFTs in the future will add them in the "right" place, where "right" is determined by the logic embedded here. This should probably be handled by adding constants to the pft physiology file.

The idea of someone wanting to add PFTs is not purely theoretical: Andy Jones just told me today that he is adding some PFTs for some of his work. So I am leery of code that makes this difficult or error-prone.
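The fragile comparisons could be replaced by predicates tied to named range bounds, so adding a PFT only requires updating the constants in one place. A Python sketch; the index values are invented for illustration, while the real bounds would live in pftvarcon.F90:

```python
# Hypothetical PFT index bounds in the style of pftvarcon constants;
# the numeric values here are invented for illustration.
NDLLF_EVR_TMP_TREE = 1   # first natural vegetated type
NC4_GRASS = 14           # last natural (non-crop) type
NPCROPMIN = 15           # first crop type
NPCROPMAX = 16           # last crop type

def is_crop(ivt):
    """Crop test via explicit crop bounds, not 'everything after grass'."""
    return NPCROPMIN <= ivt <= NPCROPMAX

def is_natural_veg(ivt):
    """Natural-vegetation test via named endpoints of the natural range."""
    return NDLLF_EVR_TMP_TREE <= ivt <= NC4_GRASS
```

A PFT added outside these ranges then no longer silently changes which branch of the fire code runs.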

(5) There are some seemingly ad-hoc conditionals, some of which seem incorrect. In particular, there seem to be terms added to conditionals that don't need to (or shouldn't) be there. For example:

(a)
! For crop PFT's
if( ivt(p) > nc4_grass .and. wtcol(p) > 0._r8 .and. leafc_col(c) > 0._r8 )then
fuelc_crop(c)=fuelc_crop(c) + (leafc(p) + leafc_storage(p) + &
leafc_xfer(p))*wtcol(p)/cropf_col(c) + &
totlitc(c)*leafc(p)/leafc_col(c)*wtcol(p)/cropf_col(c)
end if

Why should this only be done if leafc_col(c) > 0? Note that the first parenthesized term can be non-zero even if leafc_col(c) == 0, because leafc_storage or leafc_xfer could be > 0.

Also:

(b)
if( .not. shr_infnan_isnan(btran2(p)) .and. btran2(p) .le. 1._r8 )then

Why is this check needed? Since this code is in a loop over filter_soilc points, it shouldn't include points that have NaN values; and I can't see how btran2 could be greater than 1 -- and if it ever were greater than 1, I would question whether it's really correct to exclude points that have values greater than 1: is it really correct to treat points with values of 1.0000000001 (maybe due to rounding error??) fundamentally differently from points with values of 1.0?

(6) More intermediate variables (and/or comments) are needed to make it more clear what this code is doing, so people other than the author can modify it in the future. For example, in the example in #5a, I have no idea what this term is trying to accomplish:

                        totlitc(c)*leafc(p)/leafc_col(c)*wtcol(p)/cropf_col(c)

As another example, this equation is very hard to read:

    baf_peatf(c) = boreal_peatfire_c*exp(-SHR_CONST_PI*(max(wf2(c),0._r8)/0.3_r8))* &
    max(0._r8,min(1._r8,(tsoi17(c)-SHR_CONST_TKFRZ)/10._r8))*peatf_lf(c)* &
    (1._r8-fsat(c))

(7) There are many places where long expressions do two things at once: (1) compute some pft-level variable with a complex expression, and (2) average from pft to column -- all in a single line of code. The example in #5a, above, illustrates this.

The code would be clearer and easier to modify if these expressions were split into two pieces: (1) first compute pft-level variable for all quantities, and then (2) average to column, preferably via a call to p2c.
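The two-step pattern proposed above can be sketched as follows; p2c here is a stand-in for the model's patch-to-column averaging routine, and the values are made up:

```python
def p2c(patch_vals, wtcol):
    """Weighted patch-to-column average -- a stand-in for the model's p2c."""
    return sum(v * w for v, w in zip(patch_vals, wtcol))

# Step 1: compute the full per-patch quantity on its own, with room for
# intermediate names and comments.
leafc = [10.0, 2.0]
leafc_storage = [1.0, 0.0]
leafc_xfer = [0.5, 0.5]
fuel_patch = [a + b + c for a, b, c in zip(leafc, leafc_storage, leafc_xfer)]

# Step 2: one recognizable averaging call, instead of folding the
# weighting into the same long expression.
fuelc_col = p2c(fuel_patch, wtcol=[0.25, 0.75])
```

Separating the science expression from the averaging makes each piece independently reviewable, which is the point of the suggestion.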

(8) The use of cwtgcell in this code is suspicious to me:

             if( ivt(p) == nbrdlf_evr_trp_tree .and. wtcol(p) .gt. 0._r8 )then
                trotr1_col(c)=trotr1_col(c)+wtcol(p)*cwtgcell(c)
             end if
             if( ivt(p) == nbrdlf_dcd_trp_tree .and. wtcol(p) .gt. 0._r8 )then
                trotr2_col(c)=trotr2_col(c)+wtcol(p)*cwtgcell(c)
             end if
             if( ivt(p) == nbrdlf_evr_trp_tree .or. ivt(p) == nbrdlf_dcd_trp_tree )then
                if(lfpftd(p).gt.0._r8)then
                   dtrotr_col(c)=dtrotr_col(c)+lfpftd(p)*cwtgcell(c)
                end if
             end if

There appears to be no other science code (i.e., non-infrastructure code) that depends on weights on the grid cell. I believe that all code is intended to compute per-area quantities, so a column's weight on the grid cell should be irrelevant in computing a column-level quantity.

If this code is truly correct, it needs more explanation.

(9) Why does the normalization differ for these two averages from pft to col:

          rootc_col(c) = rootc_col(c) + (frootc(p) + frootc_storage(p) + &
                         frootc_xfer(p) + deadcrootc(p) +                &
                         deadcrootc_storage(p) + deadcrootc_xfer(p) +    &
                         livecrootc(p)+livecrootc_storage(p) +           &
                         livecrootc_xfer(p))*wtcol(p)

          fsr_col(c) = fsr_col(c) + fsr_pft(ivt(p))*wtcol(p)/(1.0_r8-cropf_col(c))

i.e., why does the second normalize by (1-cropf_col), but the first does not?

(10) There are lots of magic numbers. For example:

    fire_m   = exp(-SHR_CONST_PI *(m/0.69_r8)**2)*(1.0_r8 - max(0._r8, &
               min(1._r8,(forc_rh(g)-30._r8)/(70._r8-30._r8))))*  &
               min(1._r8,exp(SHR_CONST_PI*(forc_t(g)-SHR_CONST_TKFRZ)/10._r8))
    lh       = 0.0035_r8*6.8_r8*hdmlf**(0.43_r8)/30._r8/24._r8
    fs       = 1._r8-(0.01_r8+0.98_r8*exp(-0.025_r8*hdmlf))
    ig       = (lh+forc_lnfm(g)*0.25_r8)*(1._r8-fs)*(1._r8-cropf_col(c)) 

(11) In this code and some following code, normalizations are sometimes done by (1 - cropf_col), and sometimes by lfwt:

                   if( ivt(p) .ge. nbrdlf_evr_shrub )then      !for shurb and grass
                      lgdp_col(c)  = lgdp_col(c) + (0.1_r8 + 0.9_r8*    &
                                     exp(-1._r8*SHR_CONST_PI* &
                                     (gdp_lf(c)/8._r8)**0.5_r8))*wtcol(p) &
                                     /(1.0_r8 - cropf_col(c))
                      lgdp1_col(c) = lgdp1_col(c) + (0.2_r8 + 0.8_r8*   &
                                     exp(-1._r8*SHR_CONST_PI* &
                                     (gdp_lf(c)/7._r8)))*wtcol(p)/lfwt(c)
                      lpop_col(c)  = lpop_col(c) + (0.2_r8 + 0.8_r8*    &
                                     exp(-1._r8*SHR_CONST_PI* &
                                     (hdmlf/450._r8)**0.5_r8))*wtcol(p)/lfwt(c)

Is lfwt = (1 - cropf_col)?? If so, these normalizations should be made consistent. If not, why do they differ, and why are some normalizations done by one factor and others by another?

(12) The computations of lgdp_col, lgdp1_col and lpop_col are hard to decipher, and could use some explanatory comments and/or code cleanup (e.g., use of intermediate variables) to make it more clear what's going on here.

(13) The following code needs some explanation:

    if (fpftdyn /= ' ') then    !true when landuse data is used
       do fc = 1,num_soilc
          c = filter_soilc(fc)
          if( dtrotr_col(c) .gt. 0._r8 )then
             if( kmo == 1 .and. kda == 1 .and. mcsec == 0)then
                lfc(c) = 0._r8
             end if
             if( kmo == 1 .and. kda == 1 .and. mcsec == dt)then
                lfc(c) = dtrotr_col(c)*dayspyr*secspday/dt
             end if
          else
             lfc(c)=0._r8
          end if
       end do
    end if

It looks like this is trying to re-initialize lfc at the beginning of each year. However, I'm concerned about whether this will behave correctly if dtrotr_col(c) == 0 on the first and/or second timesteps of the year. In this case the variables won't be initialized for that year. Maybe that's okay - it's just not clear to me if that's what's intended.

(14) It is odd to me that deforestation fires cannot happen on Jan 1 at midnight:

    !
    ! if landuse change data is used, calculate deforestation fires and 
    ! add it in the total of burned area fraction
    !
    if (fpftdyn /= ' ') then    !true when landuse change data is used
       if( trotr1_col(c)+trotr2_col(c) > 0.6_r8 )then
          if(( kmo == 1 .and. kda == 1 .and. mcsec == 0) .or. &
               dtrotr_col(c) <=0._r8 )then
             fbac1(c)        = 0._r8
             farea_burned(c) = baf_crop(c)+baf_peatf(c)

but perhaps there is some good reason for this

(15) Subroutine CNFireFluxes hard-codes a lot of information about the structure of the CN pools. I believe the same is done elsewhere in the CN code. This makes it very hard for anyone to add or remove a carbon or nitrogen pool, because they have to understand and modify code spread all over the model. I'm not sure what the solution is, but I feel like the current implementation is not sustainable, so this is worth some thought.
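One sustainability direction for the final point about hard-coded pool structure (a sketch of the idea only, not a concrete CLM proposal): represent the pools as data, so adding a pool means extending one table rather than editing flux code spread across the model. Pool names and combustion fractions below are invented for illustration.

```python
# Hypothetical pool registry: each entry carries what the fire-flux code
# needs to know, so the loss loop below never names pools explicitly.
POOLS = [
    {"name": "leafc",     "combustion_completeness": 0.80},
    {"name": "livestemc", "combustion_completeness": 0.27},
    {"name": "deadstemc", "combustion_completeness": 0.27},
]

def fire_fluxes(state, burned_frac):
    """Carbon loss per pool for one burned-area fraction (illustrative)."""
    return {p["name"]: state[p["name"]] * p["combustion_completeness"] * burned_frac
            for p in POOLS}
```

Adding a fourth pool would then touch only the POOLS table, not the flux routine.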

move calls to alt_calc and SoilBiogeochemVerticalProfile to later in the driver sequence

Bill Sacks < [email protected] > - 2014-12-12 16:42:00 -0700
Bugzilla Id: 2107
Bugzilla CC: [email protected], [email protected], [email protected],

alt_calc and SoilBiogeochemVerticalProfile are called very early in the driver sequence. I think these should be moved later - to sometime after the dynamic subgrid stuff - since as a rule, science code should be called AFTER subgrid weights are updated. This would allow them to just operate on the active filters, rather than the active_and_inactive filters.

This is connected to bug 1668, but I don't know which depends on which.

Minor bug in creation of soil color in mksurfdata_map: points can be given the default soil color, when they should have a real color

Bill Sacks < [email protected] > - 2012-01-29 20:29:57 -0700
Bugzilla Id: 1457
Bugzilla CC: [email protected], [email protected],

I think there is a bug in mksoilcol in module mksoilMod in mksurfdata_map... I haven't done any testing to confirm this, but the logic looks wrong to me:

The relevant code is copied below.

It looks like the intention is that, in the first do loop, wst(0,no) should be 0 if there is ever a non-zero color in this output cell. However, it seems there is an unintentional order dependence here: Consider the following two cases, showing the colors in input grid cells corresponding to a single output grid cell; assume for simplicity that all weights are 0.25:

(1) 0, 0, 0, 1

(2) 1, 0, 0, 0

In case (1), wst(0,no) will be correctly set to 0.0 when the 1 value is read. This will result in soil_color_o = 1.

In case (2), wst(0,no) will be set to 0.0 at first, but upon reading the three 0 values, wst(0,no) will become 0.75. This will result in soil_color_o = 0, which will later be changed to either 4 or 15.

So, if I'm correct in my interpretation of this algorithm, soil_color_o sometimes depends on the order in which values appear in the source grid cells!

 do n = 1,tgridmap%ns
    ni = tgridmap%src_indx(n)
    no = tgridmap%dst_indx(n)
    wt = tgridmap%wovr(n)
    k  = soil_color_i(ni) * tdomain%mask(ni)
    wst(k,no) = wst(k,no) + wt
    if (k>0 .and. wst(k,no)>0.) then
       color(no) = 1
       wst(0,no) = 0.0
    end if
 enddo

 soil_color_o(:) = 0
 do no = 1,ns_o

    ! Rank non-zero weights by color type. wsti(1) is the most extensive
    ! color type. 

    if (color(no) == 1) then
       call mkrank (nsoicol, wst(0:nsoicol,no), miss, wsti, num)
       soil_color_o(no) = wsti(1)
    end if

    ! If land but no color, set color to 15 (in older dataset generic 
    ! soil color 4)
    
    if (nsoicol == 8) then
       if (soil_color_o(no)==0) soil_color_o(no) = 4
    else if (nsoicol == 20) then
       if (soil_color_o(no)==0) soil_color_o(no) = 15
    end if

code for initializing some isotope carbon state variables looks wrong

Bill Sacks < [email protected] > - 2016-07-06 12:27:50 -0600
Bugzilla Id: 2329
Bugzilla CC: [email protected], [email protected], [email protected], [email protected],

I'm not sure what this code in CNVegCarbonStateType.F90: InitCold is trying to accomplish, but it looks wrong to me:

if ( .not. is_restart() .and. get_nstep() == 1 ) then

   do p = bounds%begp,bounds%endp
      if (pftcon%c3psn(patch%itype(p)) == 1._r8) then
         this%grainc_patch(p)            = c12_cnveg_carbonstate_inst%grainc_patch(p)         * c3_r2
         this%grainc_storage_patch(p)    = c12_cnveg_carbonstate_inst%grainc_storage_patch(p) * c3_r2
         this%grainc_xfer_patch(p)       = c12_cnveg_carbonstate_inst%grainc_xfer_patch(p)    * c3_r2
         this%dispvegc_patch(p)          = c12_cnveg_carbonstate_inst%dispvegc_patch(p)       * c3_r2
         this%storvegc_patch(p)          = c12_cnveg_carbonstate_inst%storvegc_patch(p)       * c3_r2
         this%totvegc_patch(p)           = c12_cnveg_carbonstate_inst%totvegc_patch(p)        * c3_r2
         this%totc_patch(p)              = c12_cnveg_carbonstate_inst%totc_patch(p)           * c3_r2
         this%woodc_patch(p)             = c12_cnveg_carbonstate_inst%woodc_patch(p)          * c3_r2
      else
         this%grainc_patch(p)            = c12_cnveg_carbonstate_inst%grainc_patch(p)         * c4_r2
         this%grainc_storage_patch(p)    = c12_cnveg_carbonstate_inst%grainc_storage_patch(p) * c4_r2
         this%grainc_xfer_patch(p)       = c12_cnveg_carbonstate_inst%grainc_xfer_patch(p)    * c4_r2
         this%dispvegc_patch(p)          = c12_cnveg_carbonstate_inst%dispvegc_patch(p)       * c4_r2
         this%storvegc_patch(p)          = c12_cnveg_carbonstate_inst%storvegc_patch(p)       * c4_r2
         this%totvegc_patch(p)           = c12_cnveg_carbonstate_inst%totvegc_patch(p)        * c4_r2
         this%totc_patch(p)              = c12_cnveg_carbonstate_inst%totc_patch(p)           * c4_r2
         this%woodc_patch(p)             = c12_cnveg_carbonstate_inst%woodc_patch(p)          * c4_r2
      end if
   end do
end if

It looks like there should probably be a conditional on what the carbon type is of the given instance (i.e., are we operating on c13, c14 or bulk C here?). I believe that c12_cnveg_carbonstate_inst is only present if we're operating on c13 or c14. That raises the question of why this code even runs - since it looks like it's trying to operate on an absent argument for the bulk C case. I think the answer is that this block of code is never, ever executed: I'm thinking that the surrounding conditional:

if ( .not. is_restart() .and. get_nstep() == 1 ) then

is always false in initialization.

I haven't confirmed any of these suspicions....

change restart file format to decrease file size and prevent the need for some interpinic's

Bill Sacks < [email protected] > - 2013-09-26 21:49:39 -0600
Bugzilla Id: 1823
Bugzilla CC: [email protected], [email protected], [email protected], [email protected],

With the addition of lots of inactive points in memory for the sake of dynamic landunits, the size of CLM's restart files has increased significantly; e.g., for an f09 file:

old:
-rw-rw-r-- 1 sacks ncar 808M Jun 6 15:11 clmi.I1850CRUCLM45BGC.0241-01-01.0.9x1.25_g1v6_simyr1850_c130606.nc

new:
-rw-r--r-- 1 sacks ncar 2.3G Sep 26 21:03 clmi.I1850CRUCLM45BGC.0241-01-01.0.9x1.25_g1v6_simyr1850_c130926.nc

In addition, I have now had to go through a painful interpinic process twice, when simply adding 0-weight points to CLM's internal arrays.

Thus, I propose modifying the restart file format to remove a bunch of unneeded clutter - and ideally prevent having to run interpinic whenever you change the convention of which 0-weight points are included in CLM's internal arrays.

One idea would be to only write ACTIVE points to the restart file (i.e., using the active flags at the pft, col & landunit levels). This makes sense because inactive points will generally have meaningless values anyway. This would require packing arrays into temporary, active-only arrays when writing restart files. The reverse operation would need to be done when reading restart files - read into a temporary array then unpack into memory by assigning the 1st point in the restart file to the 1st active point in memory, the 2nd point in the restart file to the 2nd active point, etc. Note that this might require doing away with some of the consistency checks (e.g., that the number of pfts on the restart file matches the number computed internally based on the surface dataset) - although with some thought we might be able to maintain equivalent checks (e.g., the number of pfts on the restart file should match the number of ACTIVE pfts computed internally).
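A minimal sketch of the proposed pack/unpack, assuming a per-point active flag (the function names are hypothetical, not existing CLM routines):

```python
def pack_active(field, active):
    """Write side: keep only active points, in memory order."""
    return [v for v, a in zip(field, active) if a]

def unpack_active(packed, active, fill=0.0):
    """Read side: assign the i-th restart value to the i-th active point."""
    it = iter(packed)
    return [next(it) if a else fill for a in active]

active = [True, False, True, True, False]
field  = [1.0, -999.0, 2.0, 3.0, -999.0]      # -999 marks inactive junk values
restart  = pack_active(field, active)          # written file: [1.0, 2.0, 3.0]
restored = unpack_active(restart, active)      # [1.0, 0.0, 2.0, 3.0, 0.0]
```

The key property is that the restart file's length depends only on the number of active points, so changing which 0-weight points exist in memory no longer changes the file layout.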

Some thought would be needed as to how (if at all) this would require changes in interpinic - though my first thought is that nothing may need to be done in interpinic to handle this change.

Presumably the size of 1-d history files (with dov2xy = .false.) has increased similarly, but I'm not sure if we want to apply these ideas to history files, because this could mess with people's ability to post-process these files.

remove duplicate call to SoilBiogeochemVerticalProfile

Bill Sacks < [email protected] > - 2013-04-19 12:10:46 -0600
Bugzilla Id: 1668
Bugzilla CC: [email protected], [email protected], [email protected], [email protected], [email protected],

Currently, decomp_vertprofiles is called from two places: (1) clm_driver.F90, (2) CNDecompAlloc, which is called from CNEcosystemDyn. From talking to Charlie, it sounds like this is a mistake; it should probably just be called from the driver.

Removing the duplicate call from CNDecompAlloc causes answer changes for transient cases, which would take some examination to confirm that this hasn't broken anything. Since we are under time pressure so close to the release freeze, we have decided not to remove this duplicate call for now. However, when someone has time to sign off on the answer changes, this should be re-examined.

Possible changes for having create_crop_landunit always true

Bill Sacks < [email protected] > - 2016-12-05 15:07:27 -0700
Bugzilla Id: 2385
Bugzilla CC: [email protected], [email protected],

Here are some notes from March 28, 2013, about what I thought would need to change once we were ready to set create_crop_landunit to true always. I'm not sure how much here is still correct, relevant, or necessary. I can dig up my other notes that are referenced here if they would be helpful:

Once dynamic landunits are in place, we can always use create_crop_landunit=true. So far this hasn't been possible, because we need to maintain create_crop_landunit=false when doing transient landcover change (because otherwise changing crop area would imply changing landunit areas, which so far hasn't been allowed).

Here are some things that will need to be done to accomplish this:

  • In mkpftMod: mkpftInit: Change setting of num_natpft & num_cft in the case numpft == numstdpft

    • we will now ALWAYS have num_natpft = numstdpft - numstdcft, and num_cft = numpft - num_natpft.
    • so we can get rid of the conditional that sets num_natpft and num_cft
    • see May 14, 2013 notes in Changing weights specification in surface dataset for diffs needed for this (mkpftMod.F90 changes under "Making surface datasets for running with my new code")

    I'm not sure if we're still allowing the use of raw datasets with just the 16 standard PFTs. I know we're allowing that for the transient pft dataset, but I'm not sure about the static-in-time dataset. But it might be irrelevant... with the above change, we might be handling things the same in the code regardless of the number of PFTs on the dataset, at least with respect to the static-in-time datasets (see below for notes on the transient dataset)

  • Need PCT_CROP to be time-varying on the pftdyn dataset.

    When transient pft info comes from a file with just the 17 pfts (as it currently does), this would be interpreted as: PFTs 0-14 treated as before. Assuming that PFT #16 is always 0%, then PCT_CFT can stay fixed in time (for surface datasets with generic crop: always 100% PFT #15; for surface datasets with specific crops: whatever is determined from the raw data file); PFT #15 will determine PCT_CROP through time.

    To summarize: We'll have the following on the pftdyn dataset:

    • PCT_NATVEG: no time dimension (essentially ignored for transient runs -- that's the whole point of my refactoring)
    • PCT_CROP: varies in time
    • PCT_PFT: varies in time
    • PCT_CFT: no time dimension, at least for now
  • Making sure transient pfts are handled right with respect to crops

    • Make sure that the new formulation creates transient datasets correctly.
      • In particular: it sounds like the transient raw dataset format won't change. So, I think we can interpret the % crop in the transient raw dataset to be the crop landunit area -- and then have the individual crop areas (as % of landunit) stay fixed (see also Surface dataset meeting 6-15-12)

    NOTE: there may be conflicts between this bullet and notes in the previous bullet... need to figure out how exactly we should handle transient pfts now:

    • can we have the case where the static dataset has 24 pfts but transient has 16?
    • If so, confirm how this should be handled

mksurfdata_map raw data grids should just be listed in hgrid, not res

Bill Sacks < [email protected] > - 2013-04-12 06:02:12 -0600
Bugzilla Id: 1661
Bugzilla CC: [email protected],

Currently, it looks like grids for mksurfdata_map raw data files are listed in two places in namelist_definition_clm4_5.xml: the valid values for 'res' and the valid values for 'hgrid'. The fact that they're listed in the valid values for 'res' can be an annoyance when making mapping files with mkmapdata/regridbatch, because this tool then tries to make mapping files TO these raw data resolutions (which is unnecessary).

This annoyance is becoming magnified with my addition of a 1km raw data file, for which mapping files take a long time to generate.

I think these raw data grids should just be listed in hgrid, not res. But I haven't looked closely at this, so I'm not positive about it. I think this will require changing some things in queryDefaultNamelist.

add ability to specify namelist requirements via xml

Bill Sacks < [email protected] > - 2014-06-06 15:27:44 -0600
Bugzilla Id: 1989
Bugzilla CC: [email protected], [email protected], [email protected], [email protected],

Erik and I were talking about how it would be nice to be able to specify namelist requirements via an xml file, in order to simplify the error-checking that we currently do in build_namelist.

We could see three possible ways to do this:

(1) Specify requirements (somehow) in the namelist definition xml file

(2) Have a file similar to namelist_defaults, but giving requirements rather than defaults

(3) If a lot of our error-checking is along the lines of:

If variable X=xxx, then variable Y should be yyy, which is its default value

  • Then we could implement this by having something like an "unmodifiable" attribute in the namelist_defaults file.

e.g., rather than

yyy

instead, have:

yyy

If the unmodifiable attribute is set, that means that, not only is this its default value, but also it cannot be modified from this value by the user.

This could possibly be implemented in the add_default call.
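A sketch of how option (3) might behave, transliterated to Python for illustration (the variable names and data layout are invented; the real implementation would live in the namelist-defaults handling):

```python
def apply_defaults(defaults, user_nl):
    """defaults: name -> (default_value, unmodifiable flag).

    Returns the final settings, refusing user overrides of any variable
    whose default is marked unmodifiable.
    """
    settings = {}
    for name, (default, unmodifiable) in defaults.items():
        if name in user_nl:
            if unmodifiable and user_nl[name] != default:
                raise ValueError(
                    f"{name} cannot be modified from its default value {default!r}")
            settings[name] = user_nl[name]
        else:
            settings[name] = default
    return settings

defaults = {"var_y": ("yyy", True), "var_x": ("xxx", False)}
print(apply_defaults(defaults, {"var_x": "zzz"}))   # var_x override is allowed
```

Overriding var_y would raise an error, which is exactly the check that currently has to be hand-written in build_namelist.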

duplicate definitions of a few functions in CanopyFluxesMod.F90

Bill Sacks < [email protected] > - 2013-07-23 21:57:40 -0600
Bugzilla Id: 1769
Bugzilla CC: [email protected], [email protected], [email protected],

Starting in clm4_0_67 (Jinyun's changes), there are duplicate definitions of 3 functions: ft, fth, fth25.

It looks like Jinyun turned these into true functions, which is an improvement.

However, the old statement functions were never removed from the top of subroutine Photosynthesis:

real(r8) :: ft ! photosynthesis temperature response (statement function)
real(r8) :: fth ! photosynthesis temperature inhibition (statement function)
real(r8) :: fth25 ! scaling factor for photosynthesis temperature inhibition (statement function)

...

ft(tl,ha) = exp( ha / (rgas*1.e-3_r8*(tfrz+25._r8)) * (1._r8 - (tfrz+25._r8)/tl) )
fth(tl,hd,se,cc) = cc / ( 1._r8 + exp( (-hd+se*tl) / (rgas*1.e-3_r8*tl) ) )
fth25(hd,se) = 1._r8 + exp( (-hd+se*(tfrz+25._r8)) / (rgas*1.e-3_r8*(tfrz+25._r8)) )

(Incidentally, I had to go to a fortran reference to even figure out what these lines were doing... now that they have served their educational purpose, I think these old statement functions should be purged.)

I'm guessing that the local statement functions take precedence over the module-level functions, so the module-level functions currently aren't being used. So we should (1) remove the above lines, and then (2) confirm that answers are bfb.

lake depth raw dataset uses the wrong mask

Bill Sacks < [email protected] > - 2013-03-14 15:22:31 -0600
Bugzilla Id: 1642
Bugzilla CC: [email protected], [email protected],

Currently, we have a single 3x3min rawdata file that contains both percent lake and lake depth. This file has two masks: one that is a standard landmask, and one that gives a mask of where we have real lake depth data ("LAKEDATAMASK"). Currently, we are using the same mapping files for both of these fields, which (I assume) use the LANDMASK for a mask. This is a problem because it means that the fill value (10m) currently gets averaged with the real data when doing the regridding.

I believe that the right thing to do would be to create a new set of mapping files that uses LAKEDATAMASK as a mask. Once we do that, we might want to split the raw data file into two separate files, so we maintain the notion of a single mask per raw data file. However, apparently Zack Subin has said that this rigorous handling of the mask isn't totally necessary at this point.

Incidentally: I consider this to be a good example of what a file's mask variable should provide: a mask telling you where there are valid data.

landuse_timeseries_text_files may stomp on each other when making all surface datasets

Bill Sacks < [email protected] > - 2016-01-26 20:25:09 -0700
Bugzilla Id: 2274
Bugzilla CC: [email protected], [email protected],

About a year ago, Ben made some changes to the file names in mksurfdata.pl so that multiple runs wouldn't stomp on each other. However, it looks like the $landuse_timeseries_text_file (set by write_transient_timeseries_file) could have the same file name in multiple runs. From a quick look, I'm guessing that two files with identical names would have the same contents. If that's true, then this is only a problem if you get unlucky in terms of timing: one process is trying to read the file just as another is in the midst of rewriting it, so that the file is incomplete for the read.

The solution could be to add more qualifiers to the name of this file, so that it is unique - similarly to what is done for the other files. (I'm reworking this code significantly in a branch that I'm about to bring to the trunk.) It just isn't totally straightforward to do this, because of the existing logic for this file name.

add namelist option to turn on all history fields, add a CLM test that uses this

Bill Sacks < [email protected] > - 2014-07-31 14:44:46 -0600
Bugzilla Id: 2022
Bugzilla CC: [email protected], [email protected], [email protected], [email protected],

You can turn on all history fields by ignoring a few lines of code in histFileMod:

Index: src/clm4_5/main/histFileMod.F90
===================================================================
--- src/clm4_5/main/histFileMod.F90    (revision 61424)
+++ src/clm4_5/main/histFileMod.F90    (working copy)
@@ -4139,9 +4139,6 @@
          p2c_scale_type=scale_type_p2c, c2l_scale_type=scale_type_c2l, l2g_scale_type=scale_type_l2g)

     l_default = 'active'
-    if (present(default)) then
-       l_default = default
-    end if
     if (trim(l_default) == 'inactive') then
        return
     else
@@ -4429,9 +4426,6 @@
          no_snow_behavior=no_snow_behavior)

     l_default = 'active'
-    if (present(default)) then
-       l_default = default
-    end if
     if (trim(l_default) == 'inactive') then
        return
     else

It would be useful to have a namelist option that does this, for testing.

Then we should add a test that uses this.

ncd_io_1d_log_glob is broken

Bill Sacks < [email protected] > - 2014-01-23 11:14:21 -0700
Bugzilla Id: 1906
Bugzilla CC: [email protected], [email protected], [email protected],

When trying to read in logical values with ncd_io in clm4_5_59, I get this message:

(shr_sys_abort) ERROR: ncd_io_1d_log_glob ERROR: read in bad integer value(s) for logical data

It looks like the relevant variable is never set: starting line 1216 in ncdio_pio.F90:

      allocate(idata1d(size(data))) 
      data = (idata == 1)
      if ( any(idata1d /= 0 .and. idata1d /= 1) )then
         call shr_sys_abort( subname// &
              ' ERROR: read in bad integer value(s) for logical data' )
      end if

This worked in clm4_5_52. My guess is that it got broken in clm4_5_53, which created the .in version of ncdio_pio.F90.

c2l_scale_type not specified for many history fields

Bill Sacks < [email protected] > - 2011-08-22 13:49:04 -0600
Bugzilla Id: 1397
Bugzilla CC: [email protected], [email protected], [email protected],

Many history fields do not have a c2l_scale_type parameter (in histFldsMod), but it seems they should. For example, there is a set of water flux variables, starting with QFLX_RAIN_GRND and ending with QFLX_DEW_SNOW, most of which do not have a c2l_scale_type. From talking with Keith Oleson, it seems that at least some and maybe all of these should have c2l_scale_type='urbanf', by analogy with similar fluxes that do have a c2l_scale_type specified.

From talking with Keith Oleson: it sounds like most fluxes should have c2l_scale_type='urbanf', but this isn't necessarily always true. So this will require more investigation to determine the appropriate scale type for each history field.

Most (all?) of the fields that do not have a c2l_scale_type are ones that were added after the urban model came in - for example, fields that were added when the CN code came in. So my guess is that whoever added these fields didn't realize that a c2l_scale_type was required.

After these fields are fixed, perhaps scale_type_c2l should be made a required argument to hist_addfld1d and hist_addfld2d to prevent this problem from arising again in the future.

better error checking needed for copying urban parameters in mksurfdata.pl

Bill Sacks < [email protected] > - 2016-01-27 15:07:02 -0700
Bugzilla Id: 2275
Bugzilla CC: [email protected], [email protected],

For this code in mksurfdata.pl (which was added based on bug 1681):

        #
        # If urban point, overwrite urban variables from previous surface dataset to this one
        #
        if ( $urb_pt && ! $opts{'no_surfdata'} ) {
           my $prvsurfdata = `$scrdir/../../../bld/queryDefaultNamelist.pl $queryopts -var fsurdat`;
           if ( $? != 0 ) {
              die "ERROR:: previous surface dataset file NOT found\n";
           }
           chomp( $prvsurfdata );
           my $varlist = "CANYON_HWR,EM_IMPROAD,EM_PERROAD,EM_ROOF,EM_WALL,HT_ROOF,THICK_ROOF,THICK_WALL,T_BUILDING_MAX,T_BUILDING_MIN,WIND_HGT_CANYON,WTLUNIT_ROOF,WTROAD_PERV,ALB_IMPROAD_DIR,ALB_IMPROAD_DIF,ALB_PERROAD_DIR,ALB_PERROAD_DIF,ALB_ROOF_DIR,ALB_ROOF_DIF,ALB_WALL_DIR,ALB_WALL_DIF,TK_ROOF,TK_WALL,TK_IMPROAD,CV_ROOF,CV_WALL,CV_IMPROAD,NLEV_IMPROAD,PCT_URBAN,URBAN_REGION_ID";
           print "Overwrite urban parameters with previous surface dataset values\n";
           $cmd = "ncks -A -v $varlist $prvsurfdata $fsurdat_fname";
           print "$cmd\n";
           if ( ! $opts{'debug'} ) { system( $cmd ); }
        }

I discovered, the hard way, that there is no error checking on the ncks command. I had been iteratively making new surface datasets and updating the namelist_defaults, and apparently one time when I ran this, not all of the surface datasets in namelist_defaults actually existed in the inputdata space. So the ncks command failed.

The big problem here is that, when this fails (or even if the 'die' near the start of this block is executed), the surface datasets are left there. So, unless you look carefully at the output of the mksurfdata.pl script, it looks like the surface datasets were created correctly, when in fact all of these urban fields are wrong.

So: I propose that, if either (a) the 'die' in this block is executed, or (b) the ncks command fails, then the surface dataset should be removed, or maybe renamed to something like "$fsurdat_fname.INCOMPLETE"; in any case, something that will prevent it from being used accidentally.
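A sketch of the proposed behavior, written in Python for illustration (the function name is invented; in mksurfdata.pl this logic would wrap the ncks call and the die):

```python
import os
import subprocess

def run_or_mark_incomplete(cmd, output_file):
    """Run a shell command; on failure, rename its output file so it
    cannot be mistaken for a correctly generated dataset."""
    result = subprocess.run(cmd, shell=True)
    if result.returncode != 0:
        if os.path.exists(output_file):
            os.rename(output_file, output_file + ".INCOMPLETE")
        raise RuntimeError(f"command failed, output marked INCOMPLETE: {cmd}")
```

With this in place, a failed ncks leaves "surfdata_....nc.INCOMPLETE" behind rather than a plausible-looking but wrong surface dataset.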

duplicate CLM history fields, both on by default

Bill Sacks < [email protected] > - 2014-07-09 16:24:28 -0600
Bugzilla Id: 2011
Bugzilla Depends: 1856,
Bugzilla CC: [email protected], [email protected], [email protected], [email protected],

In working out some differences in Mariana's refactor branch, I discovered that there are a number of history fields that refer to the exact same thing but with a different name. The two I noticed are:

   call hist_addfld1d (fname='TOTLITC', units='gC/m^2', &
        avgflag='A', long_name='total litter carbon', &
        ptr_col=this%totlitc_col)
   call hist_addfld1d (fname='LITTERC', units='gC/m^2', &
        avgflag='A', long_name='litter C', &
        ptr_col=this%totlitc_col)

and:

   call hist_addfld1d (fname='TOTSOMC', units='gC/m^2', &
        avgflag='A', long_name='total soil organic matter carbon', &
        ptr_col=this%totsomc_col)
   call hist_addfld1d (fname='SOILC', units='gC/m^2', &
        avgflag='A', long_name='soil C', &
        ptr_col=this%totsomc_col)

The code had this comment:

   ! add history fields for all CLAMP CN variables

So presumably the variables needed to be renamed for CLAMP? But at the very least, it seems like only one of each of these should be on by default.

max daylength is hard-coded for present-day orbital parameters

Bill Sacks < [email protected] > - 2013-10-16 09:49:44 -0600
Bugzilla Id: 1843
Bugzilla CC: [email protected], [email protected], [email protected], [email protected], [email protected],

In doing my daylength refactor (https://svn-ccsm-models.cgd.ucar.edu/clm2/branches/daylength_refactor/), I discovered that max daylength is hard-coded for present-day orbital parameters. In talking with Nan, this could be a problem for present-day runs.

It is easiest to see the problem on the above-referenced branch (which will come to the trunk soon).

I believe that the daylength calculation (in models/lnd/clm/src/clm4_5/biogeophys/DaylengthMod.F90 - see the daylength function) is agnostic to orbital parameters, so that should be fine - as long as it is getting the true, current solar declination angle, which I believe is the case.

The problem is in src/clm4_5/main/iniTimeConst.F90 - see the setting of max_decl there. There are actually two related problems here:

(1)

Clean up use and implementation of accumulMod

Bill Sacks < [email protected] > - 2014-08-05 15:32:30 -0600
Bugzilla Id: 2023
Bugzilla CC: [email protected], [email protected], [email protected], [email protected],

Currently, on Mariana's refactor branch, there is a lot of verbosity and repeated boilerplate code needed in the InitAccVars routines, which initialize variables associated with time-accumulated fields. For example:

    begp = bounds%begp; endp = bounds%endp

    ! Allocate needed dynamic memory for single level pft field
    allocate(rbufslp(begp:endp), stat=ier)
    if (ier/=0) then
       write(iulog,*)' in '
       call endrun(msg=" allocation error for rbufslp"//&
            errMsg(__FILE__, __LINE__))
    endif

    nstep = get_nstep()

    call extract_accum_field ('AGDDTW', rbufslp, nstep) 
    this%agddtw_patch(begp:endp) = rbufslp(begp:endp)

    call extract_accum_field ('AGDD', rbufslp, nstep) 
    this%agdd_patch(begp:endp) = rbufslp(begp:endp)

    deallocate(rbufslp)

Could this be replaced by simply the following?:

    call extract_accum_field('AGDDTW', this%agddtw_patch(bounds%begp:bounds%endp))

    call extract_accum_field('AGDD', this%agdd_patch(begp:endp))

The two major changes here are:

(1) the extract_accum_field routine gets nstep, rather than requiring the caller to obtain and pass in this value.

(2) I avoid the use of rbufslp - which may be needed in some cases, but doesn't appear to be needed here.

Edit (2019-11-15): It looks like rbufslp and similar variables are used in order to pass in pointers with particular lower bounds. I think this isn't needed with the current setup of having these routines always called outside a clump loop. But even if this is needed (e.g., to allow calling these routines from inside a clump loop), I think we could avoid this with a mechanism like what's suggested in #30 (comment).

rework or remove max_patch_per_col

Bill Sacks < [email protected] > - 2015-10-04 14:41:13 -0600
Bugzilla Id: 2227
Bugzilla CC: [email protected], [email protected],

Currently in clm_varpar, we have:

max_patch_per_col = max(numpft+1, numcft, maxpatch_urb)

This is used in loops in the code like this:

do pi = 1,max_patch_per_col
   do j = 1,nlevsoi
      do fc = 1, num_hydrologyc
         c = filter_hydrologyc(fc)
         if (pi <= col%npatches(c)) then

However: Using numcft in this 'max' gives a significant overestimate of max_patch_per_col when use_crop is true. This should be reworked, or better, removed from the code entirely: it is a maintenance problem, and I can't imagine that looping idioms that use it help performance much; they likely hurt performance, at least when it is overestimated by so much.

Loops like this could be reworked to avoid needing max_patch_per_col by either:

(1) Looping over patches, and finding the corresponding column with patch%col(p)

(2) Looping over columns, then looping from begp to endp in an inner loop. This would likely be less vectorization-friendly, but you save many unnecessary loop iterations, so it's probably not likely to hurt performance much, and may help it.
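A sketch of option (1) in Python, to show that every patch is visited exactly once with no max_patch_per_col bound (the names are stand-ins for patch%col and a column-level quantity; not actual model code):

```python
# Instead of looping pi = 1..max_patch_per_col over every column and testing
# pi <= npatches(c), loop over the patches themselves and map each back to
# its parent column.
def accumulate_by_patch(vals_patch, patch_col, ncol):
    col_sum = [0.0] * ncol
    for p, v in enumerate(vals_patch):     # one iteration per existing patch
        col_sum[patch_col[p]] += v
    return col_sum

# three patches; patches 0 and 1 belong to column 0, patch 2 to column 1
print(accumulate_by_patch([1.0, 2.0, 3.0], [0, 0, 1], 2))   # [3.0, 3.0]
```

The iteration count is exactly the number of patches, so an overestimated max_patch_per_col can't inflate it.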

ED runs die in PIO while trying to read params file, starting with cime3

Bill Sacks < [email protected] > - 2015-09-29 15:40:54 -0600
Bugzilla Id: 2221
Bugzilla CC: [email protected], [email protected], [email protected], [email protected], [email protected],

With the updates to cime3 (planned for clm4_5_3_r131), these two ED runs now die while trying to read the params file:

SMS_Ld5.f10_f10.ICLM45ED.yellowstone_pgi.clm-edTest
SMS_Ld5.f19_g16.ICLM45ED.yellowstone_intel.clm-edTest

The deaths look like:

1: Opened existing file
1: /glade/p/cesmdata/cseg/inputdata/lnd/clm2/paramdata/clm_params_ed.c150317.nc
1: 0
[then many instances of the following:]
9: pio_support::pio_die:: myrank= -1 : ERROR: pionfget_mod.F90:
9: 426 : Input/Output data amount mismatch
41:Image PC Routine Line Source
41:cesm.exe 000000000144CC38 Unknown Unknown Unknown
41:cesm.exe 0000000000D705D1 pio_support_mp_pi 120 pio_support.F90
41:cesm.exe 0000000000D6E645 pio_utils_mp_chec 59 pio_utils.F90
41:cesm.exe 0000000000E27DDD pionfget_mod_mp_g 426 pionfget_mod.F90.in
41:cesm.exe 00000000005A859D ncdio_pio_mp_ncd_ 1202 ncdio_pio.F90.in
41:cesm.exe 0000000000AFBBB2 paramutilmod_mp_r 50 paramUtilMod.F90
41:cesm.exe 00000000007BC556 edparamsmod_mp_ed 106 EDParamsMod.F90
41:cesm.exe 00000000005CE868 readparamsmod_mp_ 69 readParamsMod.F90
41:cesm.exe 0000000000508628 clm_initializemod 355 clm_initializeMod.F90
41:cesm.exe 00000000004F7DC8 lnd_comp_mct_mp_l 232 lnd_comp_mct.F90
41:cesm.exe 00000000004262AB component_mod_mp_ 229 component_mod.F90
41:cesm.exe 0000000000415D38 cesm_comp_mod_mp_ 1086 cesm_comp_mod.F90
41:cesm.exe 00000000004213ED MAIN__ 92 cesm_driver.F90

I wonder if this is showing up for the first time now because one of the changes in the cime3 update was:

NEW: pio_typename = 'pnetcdf'
BASELINE: pio_typename = 'netcdf'

i.e., could the use of pnetcdf be revealing a problem that until now went unnoticed?

each patch / column / landunit type index should have unique meaning

Bill Sacks < [email protected] > - 2015-09-18 12:30:09 -0600
Bugzilla Id: 2218
Bugzilla CC: [email protected], [email protected], [email protected], [email protected],

I feel that each patch / column / landunit type index should have unique meaning. For example, we currently use noveg (0) for the patch type for the bare ground patch on the natural veg landunit, as well as for all patches on special landunits. For column index, I think we're doing pretty well now, although we should confirm that there aren't any indices that are used on multiple landunits.

This would help in two ways:

(1) It would make it easier to process the 1-d (vector) history output

(2) It would allow for more straightforward logic in initInterp. For example, when checking is_sametype for patches, we could just check whether the patch type is the same, without having to also check the column and landunit type - because an identical patch type would imply identical column and landunit type, too. (In the case of generic crop, which can appear on either the natural veg landunit or the crop landunit, identical patch type doesn't imply identical col/landunit type, but in that case, that's what we want: i.e., we just want to find the same patch type without regards for what landunit type it's on.)

BTRAN should be spval over all special landunits

Bill Sacks < [email protected] > - 2014-06-25 20:00:47 -0600
Bugzilla Id: 2002
Bugzilla CC: [email protected], [email protected], [email protected],

I noticed that the BTRAN history field is spval over urban and lake landunits (as it should be). However, it is 0 over glacier / glacier_mec, and also 0 over wetland. I'm not sure if its value over wetland matters very much (because I'm thinking that a grid cell can never have a mix of wetland and anything else; is that true?), but its 0 values over glacier mean: If a grid cell has a mix of natural vegetation and glacier, its grid cell-average BTRAN will be wrong because these 0 values are averaged in.

I bet a similar problem exists for other history fields; I just happened to notice it for BTRAN.
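The skew can be illustrated with a toy weighted average. This is a Python sketch with made-up numbers, not CLM's history-averaging code; the spval convention follows CLM history output.

```python
# Illustrative only: gridcell average of a field over landunit weights.
SPVAL = 1.0e36  # special/missing value, as in CLM history output

def gridcell_avg(values, weights):
    # Average only over landunits where the field is valid (spval-aware).
    pairs = [(v, w) for v, w in zip(values, weights) if v != SPVAL]
    wsum = sum(w for _, w in pairs)
    return SPVAL if wsum == 0 else sum(v * w for v, w in pairs) / wsum

# A cell that is half natural vegetation (BTRAN = 0.8) and half glacier:
# storing 0 over glacier drags the average to 0.4, while storing spval
# correctly excludes glacier and gives 0.8.
biased  = gridcell_avg([0.8, 0.0],   [0.5, 0.5])  # glacier stored as 0
correct = gridcell_avg([0.8, SPVAL], [0.5, 0.5])  # glacier stored as spval
```
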

re-evaluate pnetcdf for 1-d fields with pio2

Bill Sacks < [email protected] > - 2015-06-16 10:32:47 -0600
Bugzilla Id: 2185
Bugzilla CC: [email protected], [email protected], [email protected], [email protected], [email protected],

Once we start using PIO2 in CLM, we should re-evaluate bug 1730.

In particular, we should try backing out the workaround that was put in place in clm4_5_1_r091, which involved avoiding pnetcdf for 1-d (dov2xy = .false.) fields. Then we should see if the problem documented in bug 1730 still exists.

If it seems to be resolved with pio2, we should back out the workaround.

default finidat_interp_dest file name should have instance number

Bill Sacks < [email protected] > - 2016-02-26 10:49:37 -0700
Bugzilla Id: 2289
Bugzilla CC: [email protected], [email protected], [email protected], [email protected], [email protected], [email protected],

Kevin Raeder pointed out that the default file name for finidat_interp_dest ('finidat_interp_dest.nc') leads init_interp to stomp on itself when using use_init_interp with multi-instance. We should change the default to have the instance number in the file name.

In the meantime, the workaround is to explicitly specify finidat_interp_dest in each instance's user_nl_clm.
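A sketch of the proposed default naming: embed the instance number in the file name so each instance writes its own file. The exact suffix format here (a CESM-style zero-padded "_0001") is an assumption, not a decided convention.

```python
# Hypothetical sketch of an instance-aware default for finidat_interp_dest.

def default_finidat_interp_dest(inst_num=None):
    base = "finidat_interp_dest"
    if inst_num is None:
        return base + ".nc"             # single-instance case, current default
    return f"{base}_{inst_num:04d}.nc"  # multi-instance: one file per instance
```
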

check for LSF_PJL_TYPE in regridbatch.sh doesn't work correctly

Bill Sacks < [email protected] > - 2014-01-22 17:01:23 -0700
Bugzilla Id: 1904
Bugzilla CC: [email protected], [email protected],

This is from the cesm1.2.0 release, but I think this is the same in the latest CLM trunk:

It seems that the check for LSF_PJL_TYPE in tools/shared/mkmapdata/regridbatch.sh is no longer working correctly - i.e.:

   if [ ! -z $LSF_PJL_TYPE ]; then
   cmdargs="$cmdargs -b"
   fi

When I submitted this as a batch job, I got an error message:

ERROR: Program was not launched by POE. Abort!!!

Removing the above lines allows this to work, but that's not a good, general solution.
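One contributing fragility is that the variable expansion in the test is unquoted. A more defensive form of the check might look like the sketch below; this is only an illustration of the quoting issue, not a verified fix for the underlying POE launch error.

```shell
# Sketch only (not the actual regridbatch.sh fix): quoting the expansion
# keeps the test well-formed when LSF_PJL_TYPE is unset, empty, or
# contains spaces; `-n` with a quoted operand replaces the unquoted `! -z`.
cmdargs=""
if [ -n "${LSF_PJL_TYPE:-}" ]; then
    cmdargs="$cmdargs -b"
fi
echo "args:$cmdargs"
```
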

It is possible that this was user error on my part....

As a reminder, we want to move to using the shared mapping tools (bug 1683), which might make this problem go away (???).

ch4 placement in driver will cause problems if BeTR is used for nitrogen

Bill Sacks < [email protected] > - 2016-02-22 10:27:57 -0700
Bugzilla Id: 2288
Bugzilla CC: [email protected], [email protected], [email protected], [email protected], [email protected], [email protected],

The fix for bug 2225 and bug 2287, brought to the trunk in clm4_5_8_r168 (which returns the call to ch4 to where it was in clm4_5_12 and earlier), will cause problems if BeTR is ever used for nitrogen.

Here is an email from Jinyun about this:

Too bad that move caused the travel. My original intention was to make a
consistent tracking of nitrogen leaching in betr with hydrology code with the
active ch4 cycle. As it is now in tag 168, a serious inconsistency would arise
if one use betr to track nitrogen cycle with active ch4 code. However, since
there is no intent to do active bgc in betr, the flip back is OK. Yet, I do
suspect this flip back will cause trouble down the road if people intend to
use betr for carbon isotope transport, even though I personally will not do
it. Therefore, my suggestion is either to separate the set value stuff for
ch4, so it will not zero out methane flux or indicate in somewhere that
caution against people to use betr for purpose other than water isotopes (this
basically push the problem to the future, and someone has to implement the
separation strategy again).

Without having a very good understanding of this, I agree that separating the set value stuff for ch4 seems like the best solution here. Once that is done, the change to the driver sequencing in clm4_5_8_r168 should be reverted.

min & max don't work correctly for some history fields

Bill Sacks < [email protected] > - 2013-12-17 11:26:23 -0700
Bugzilla Id: 1884
Bugzilla CC: [email protected], [email protected],

The 'min' and 'max' functions don't work correctly for some history fields. Specifically, this seems to apply to fields that take on spval at some points in time, but not others. This definitely applies to the multi-layer snow fields that I'm adding; I'm not sure if it applies to any others.

This problem seems to be in clm4.0 and clm4.5, although I have only witnessed it in CLM4.5.

The fix is as shown at the bottom of the bug report; I have tested this for my new snow fields (exercising the clm4.5 code, with 2-d fields averaged to the grid cell level). Note that the diffs below just fix the problem in one place. This problem occurs in at least 3 other places in the clm4.5 code, and a similar number of places in the clm4.0 code (and maybe more; I didn't look carefully - search for nacs in the code). The main reason I'm not just making the fix right now is that I don't have time to test this fix in all the places where it occurs. This could actually be a good excuse to consolidate some of the duplicated code in histFileMod - i.e., consolidate the 4 (or more) places where the average / min / max logic is duplicated (with small variations).

Note that I am planning to add min/max versions of the snow history fields to some of the CLM tests. If I do this, then I would expect this bug fix to change answers for these min/max snow history fields (in particular, I expect to see a FILLDIFF in cprnc).

Here are the diffs that fix the problem in one place:

Index: main/histFileMod.F90
===================================================================
--- main/histFileMod.F90	(revision 56050)
+++ main/histFileMod.F90	(working copy)
@@ -1391,10 +1391,10 @@
                 if (field_gcell(k,j) /= spval) then
                    if (nacs(k,j) == 0) hbuf(k,j) = -1.e50_r8
                    hbuf(k,j) = max( hbuf(k,j), field_gcell(k,j) )
+                   nacs(k,j) = 1
                 else
-                   hbuf(k,j) = spval
+                   if (nacs(k,j) == 0) hbuf(k,j) = spval
                 endif
-                nacs(k,j) = 1
              end do
           end do
        case ('M') ! Minimum over time
@@ -1403,10 +1403,10 @@
                 if (field_gcell(k,j) /= spval) then
                    if (nacs(k,j) == 0) hbuf(k,j) = +1.e50_r8
                    hbuf(k,j) = min( hbuf(k,j), field_gcell(k,j) )
+                   nacs(k,j) = 1
                 else
-                   hbuf(k,j) = spval
+                   if (nacs(k,j) == 0) hbuf(k,j) = spval
                 endif
-                nacs(k,j) = 1
              end do
           end do
        case default
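The semantics of the fix can be sketched in Python for the maximum ('X') case: nacs is set only when a valid value is seen, and hbuf is overwritten with spval only if nothing has been accumulated yet, so an spval at one timestep no longer wipes out a previously accumulated max. This is an illustration of the diff's logic, not the Fortran itself.

```python
SPVAL = 1.0e36  # CLM's special value for missing data

def accumulate_max(hbuf, nacs, field):
    # Mirrors the fixed maximum-over-time logic from the diff above.
    for k, val in enumerate(field):
        if val != SPVAL:
            if nacs[k] == 0:
                hbuf[k] = -1.0e50
            hbuf[k] = max(hbuf[k], val)
            nacs[k] = 1          # only count timesteps with valid data
        else:
            if nacs[k] == 0:
                hbuf[k] = SPVAL  # stays spval only if never valid
```
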

Change urban code to use downscaled atmospheric forcings

Bill Sacks < [email protected] > - 2016-11-15 05:04:24 -0700
Bugzilla Id: 2377
Bugzilla CC: [email protected], [email protected],

Most of the code uses the downscaled, column-level atmospheric forcings where relevant. However, the urban code still uses the gridcell-level, non-downscaled forcings in some places (search for 'not_downscaled' in the urban modules). This is simply because, when I was originally changing the code to use the downscaled forcings, it looked like it would be a fair amount of work to change the urban code to be fully consistent, so I punted on that.

This causes problems if you extend either the urban domain or the CISM domain so that the two overlap in any grid cells - i.e., if there are any active urban points in the CISM domain.

Once this is done, we can:

(1) Remove the check for column-gridcell equality in atm2lndMod: check_downscale_consistency

(2) Remove the code that I'm about to add to glc2lndMod: update_glc2lnd_topo that prevents downscaling over urban points

Better error message when trying to use init_interp to interpolate between clm45 and clm50

Bill Sacks < [email protected] > - 2015-11-04 06:17:52 -0700
Bugzilla Id: 2240
Bugzilla CC: [email protected], [email protected], [email protected], [email protected], [email protected],

When trying to interpolate clm45 initial conditions to a clm50 case, init_interp dies because URBAN_AC is dimensioned by column in CLM45 and by landunit in CLM50.

I can see three possible solutions:

(1) Live with the fact that we cannot interpolate initial conditions files between CLM45 and CLM50 cases.

(2) Give URBAN_AC a different name on the restart file depending on whether it is at the column-level or landunit-level. In the future, something like this would probably be best: If fundamental changes are made to a restart variable, then its name should change to avoid breaking init_interp. We could do something like this retroactively, possibly combined with the feature I have added to allow listing multiple possible names for a restart variable.

(3) Change init_interp to make it skip a variable when the dimensions differ in the input and output, rather than aborting. This is probably the easiest solution in the short-term, but I worry a bit that this could make init_interp too permissive, doing the wrong thing (i.e., leaving values at their cold start values) when it should be aborting with an error.

add meta-types for landunit types

Bill Sacks < [email protected] > - 2012-03-22 16:20:18 -0600
Bugzilla Id: 1483
Bugzilla CC: [email protected],

There are lots of checks in the code that look like this:

if (ltype(l) == istsoil .or. ltype(l) == istcrop)

or:

if (ltype(l)==istice .or. ltype(l)==istice_mec) then

or:

elseif ( itypelun(l) == istdlak .or. itypelun(l) == istslak ) then

I would propose adding some logical variables like:

is_vegetated(l) (true for soil & crop landunits)

is_ice(l) (true for ice & ice_mec landunits)

is_lake(l) (true for deep & shallow lake)

This would have these advantages:

  • more easily extensible: e.g., could add a new vegetated landunit without having to change conditionals that occur throughout the code

  • safer: e.g., I just found an instance where a conditional checked ltype(l)==istice, forgetting to also check the possibility of istice_mec
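The idea can be sketched as predicates over the landunit type. The flag names follow the proposal above; the numeric type values here are illustrative stand-ins, not CLM's actual landunit constants.

```python
# Illustrative landunit type constants (values are placeholders).
ISTSOIL, ISTCROP, ISTICE, ISTICE_MEC, ISTDLAK, ISTSLAK = 1, 2, 3, 4, 5, 6

def is_vegetated(ltype):
    return ltype in (ISTSOIL, ISTCROP)

def is_ice(ltype):
    return ltype in (ISTICE, ISTICE_MEC)

def is_lake(ltype):
    return ltype in (ISTDLAK, ISTSLAK)

# A new vegetated landunit would only need to be added to is_vegetated,
# not to conditionals scattered throughout the code.
```
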

refactor code that assumes a particular ordering of PFT type constants

Bill Sacks < [email protected] > - 2013-12-23 12:57:11 -0700
Bugzilla Id: 1886
Bugzilla CC: [email protected], [email protected], [email protected], [email protected], [email protected],

Here's a fun thought experiment: How much of CLM would break if you changed the numbering of these variables in pftvarcon:

integer :: noveg !value for not vegetated
integer :: ndllf_evr_tmp_tree !value for Needleleaf evergreen temperate tree
integer :: ndllf_evr_brl_tree !value for Needleleaf evergreen boreal tree
integer :: ndllf_dcd_brl_tree !value for Needleleaf deciduous boreal tree
integer :: nbrdlf_evr_trp_tree !value for Broadleaf evergreen tropical tree
integer :: nbrdlf_evr_tmp_tree !value for Broadleaf evergreen temperate tree
integer :: nbrdlf_dcd_trp_tree !value for Broadleaf deciduous tropical tree
integer :: nbrdlf_dcd_tmp_tree !value for Broadleaf deciduous temperate tree
integer :: nbrdlf_dcd_brl_tree !value for Broadleaf deciduous boreal tree
integer :: ntree !value for last type of tree
integer :: nbrdlf_evr_shrub !value for Broadleaf evergreen shrub
integer :: nbrdlf_dcd_tmp_shrub !value for Broadleaf deciduous temperate shrub
integer :: nbrdlf_dcd_brl_shrub !value for Broadleaf deciduous boreal shrub
integer :: nc3_arctic_grass !value for C3 arctic grass
integer :: nc3_nonarctic_grass !value for C3 non-arctic grass
integer :: nc4_grass !value for C4 grass

I don't know the answer to this question, but I know the answer is non-zero. Here are some examples, from a quick search:

From pftdynMod:

  ! If this is a tree pft, then
  ! get the annual harvest "mortality" rate (am) from harvest array
  ! and convert to rate per second
  if (ivt(p) > noveg .and. ivt(p) < nbrdlf_evr_shrub) then

From CNFireMod:

          ! For crop veg types
          if( pft%itype(p) > nc4_grass )then
             cropf_col(c) = cropf_col(c) + pft%wtcol(p)
          end if
          ! For natural vegetation (non-crop)
          if( pft%itype(p) >= ndllf_evr_tmp_tree .and. pft%itype(p) <= nc4_grass )then
             lfwt(c) = lfwt(c) + pft%wtcol(p)
          end if

From CNVegStructUpdateMod:

         ! if shrubs have a squat taper 
         if (ivt(p) >= nbrdlf_evr_shrub .and. ivt(p) <= nbrdlf_dcd_brl_shrub) then
            taper = 10._r8

And a corollary question: How much of CLM SHOULD break when you change the ordering of these? I would argue strongly that the answer to this question SHOULD BE ZERO.

The problems I see with usages like the above examples are:

(1) If we change the ordering of these pfts in the future, or remove any pfts, we'll break a lot of code

(2) More importantly: If a user naively tries to add a new pft, they are likely to inadvertently break code in hard-to-detect ways. For example, if someone naively added a new shrub type before nbrdlf_evr_shrub, it would be treated as a tree by the above code in pftdynMod.

What should be done? I'd say that the only conditional that should be allowed involving ivt is equality: no checks of less than something or greater than something. To enable logic like the above, there should be additional metadata associated with each pft, like is_tree, is_crop, is_natveg, is_shrub, etc.
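A minimal sketch of that metadata approach, assuming a per-PFT flag table (the table contents here are illustrative, not the real pftvarcon data):

```python
# Sketch: per-PFT metadata instead of ordering-based range checks.
PFT_META = {
    "ndllf_evr_tmp_tree": {"is_tree": True,  "is_shrub": False, "is_crop": False},
    "nbrdlf_evr_shrub":   {"is_tree": False, "is_shrub": True,  "is_crop": False},
    "nc4_grass":          {"is_tree": False, "is_shrub": False, "is_crop": False},
    "generic_crop":       {"is_tree": False, "is_shrub": False, "is_crop": True},
}

def is_tree(pft_name):
    # Lookup only -- no "ivt(p) < nbrdlf_evr_shrub" style ordering
    # assumptions, so reordering or inserting PFTs cannot silently
    # reclassify them.
    return PFT_META[pft_name]["is_tree"]
```
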

lots of duplicated code in mksurfdata_map

Bill Sacks < [email protected] > - 2013-02-27 11:37:02 -0700
Bugzilla Id: 1633
Bugzilla CC: [email protected], [email protected], [email protected],

There is a lot of duplicated code in mksurfdata_map. There is duplication between files (fields that are regridded in a simple way have modules that duplicate nearly all code from other modules), as well as within certain files (e.g., mkfileMod).

This tool is due for a refactoring to remove some of this duplication.

dynamic root code divides by 0

Bill Sacks < [email protected] > - 2015-10-31 05:15:52 -0600
Bugzilla Id: 2237
Bugzilla CC: [email protected], [email protected], [email protected], [email protected],

Runs that activate both use_dynroot and crop crash in the first time step if run in debug mode. This is due to a divide by 0 error in this line in CNRootDynMod:

                root_depth(p) = max(zi(c,2), min(hui(p)/huigrain(p)* root_dmx(ivt(p)), root_dmx(ivt(p))))

huigrain(p) can sometimes be 0; in the one case I looked at, hui was also 0. Also, in the first timestep of a cold start, huigrain is NaN.

So this needs:

(1) initialization of huigrain in cold start so that it isn't NaN: it should probably be initialized to 0

(2) appropriate handling of the case when huigrain = 0
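Point (2) can be sketched by guarding the division. Variable names follow the CNRootDynMod line above; the behavior chosen for huigrain = 0 (leave root depth at the zi(c,2) floor) is an assumption about the desired handling, not the settled fix.

```python
# Sketch of a guarded root-depth update (not the actual CNRootDynMod fix).

def root_depth(zi2, hui, huigrain, root_dmx):
    if huigrain > 0.0:
        frac = min(hui / huigrain * root_dmx, root_dmx)
    else:
        # No growing-degree target yet (huigrain initialized to 0):
        # no phenology-driven deepening, so fall back to the floor.
        frac = 0.0
    return max(zi2, frac)
```
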

albgrd and albgri history fields depend on decomposition, for urban points

Bill Sacks < [email protected] > - 2013-08-31 07:26:34 -0600
Bugzilla Id: 1806
Bugzilla CC: [email protected], [email protected], [email protected],

I believe that this problem only affects history fields, and doesn't actually affect the operation of the model. This is a restatement of the initial bug in bug 1310 (most of the comments in that bug report really relate to a different bug).

For urban points, albgrd and albgri depend on the decomposition - either the number of tasks or the number of threads per task.

I believe that what is going on is the following (copied from bug 1310; I haven't checked carefully whether the behavior has changed slightly from this):

(1) In UrbanMod.F90: UrbanAlbedo: A count is made of the number of urban
landunits with coszen > 0 (num_solar); note that this count is just of the
number of landunits that this processor is responsible for; thus, this is where
the # PE-dependence comes in, I think.

(2) Later in that subroutine, a bunch of calculations are done if num_solar > 0
-- i.e., if this PE is responsible for at least one urban landunit with
coszen > 0. Note that many of these calculations are done for all landunits,
even ones for which coszen = 0. This introduces the possibility for different
results depending on the decomposition.

(3) The particular difference that I am seeing is in albgrd & albgri. These are
initialized to 0 at the start of the subroutine, and so remain 0 on any PE for
which num_solar = 0. However, for PEs with num_solar > 0, landunits that have
coszen = 0 end up getting albgrd = albgri = 1. This is because the calculation
of albgrd & albgri depends on the values of the sref_* variables, which are
initialized to 1 (and stay at 1 for any landunit for which coszen = 0).

I have confirmed this by comparing the following from clm4_5_20: SMS_Lm1_P180x1.f19_g16.ICLM45BGC.yellowstone_intel vs SMS_Lm1_P360x1.f19_g16.ICLM45BGC.yellowstone_intel - both with ALBGRD, ALBGRI, ALBD and ALBI added to history output.

One thing I am confused about is whether this problem occurs for albgrd and albgri, but not for albd and albi - since the latter seem to just be copies of the former in UrbanMod.
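The decomposition dependence described above can be shown with a toy model: each "PE" runs the albedo calculations for its local landunits only if any local landunit has coszen > 0, so the result at a coszen = 0 point depends on which other points share its PE. This is a deliberately simplified Python illustration with made-up albedo values, not the UrbanMod code.

```python
def urban_albedo_per_pe(coszen_chunks):
    # One chunk of landunits per PE; mirrors the num_solar gating pattern.
    out = []
    for chunk in coszen_chunks:
        num_solar = sum(1 for cz in chunk if cz > 0)
        if num_solar > 0:
            # Calculations run for ALL local landunits, even coszen == 0
            # ones, which pick up the sref-derived value (1 here).
            out.extend(1.0 if cz == 0 else 0.2 for cz in chunk)
        else:
            out.extend(0.0 for _ in chunk)  # initialized value survives
    return out

# Same landunits, two decompositions, different answers at the cz == 0 point:
one_pe  = urban_albedo_per_pe([[0.5, 0.0]])
two_pes = urban_albedo_per_pe([[0.5], [0.0]])
```
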

Some clm test namelists have empty field lists for 1-d history output

Bill Sacks < [email protected] > - 2012-01-21 19:38:36 -0700
Bugzilla Id: 1454
Bugzilla Depends: 1455,
Bugzilla CC: [email protected],

Some of the namelists in test/system/nl_files set hist_dov2xy = .false. for a history file, but do not add any fields to that file, so there aren't actually any 1-d comparisons done.

For example, nl_crop and nl_cn_conly. These namelists could be changed according to the following diffs; note that these field lists are basically the same as what's currently used in nl_urb, which correctly includes 1-d output:

Index: nl_crop
===================================================================
--- nl_crop     (revision 31723)
+++ nl_crop     (working copy)
@@ -9,6 +9,24 @@
  hist_fincl1    = 'GDD0', 'GDD8', 'GDD10', 
                   'GDD020', 'GDD820', 'GDD1020',
                   'GDDPLANT', 'GDDTSOI', 'A5TMIN', 'A10TMIN'
+ hist_fincl2    = 'TG','TBOT','FIRE','FIRA','FLDS','FSDS',
+                  'FSR','FSA','FGEV','FSH','FGR','TSOI',
+                  'ERRSOI','BUILDHEAT','SABV','SABG',
+                  'FSDSVD','FSDSND','FSDSVI','FSDSNI',
+                  'FSRVD','FSRND','FSRVI','FSRNI',
+                  'TSA','FCTR','FCEV','QBOT','RH2M','H2OSOI',
+                  'H2OSNO','SOILLIQ','SOILICE', 
+                  'TSA_U', 'TSA_R',
+                  'TREFMNAV_U', 'TREFMNAV_R',
+                  'TREFMXAV_U', 'TREFMXAV_R',
+                  'TG_U', 'TG_R',
+                  'RH2M_U', 'RH2M_R',
+                  'QRUNOFF_U', 'QRUNOFF_R',
+                  'SoilAlpha_U',
+                  'Qanth', 'SWup', 'LWup', 'URBAN_AC', 'URBAN_HEAT',
+                  'GDD0', 'GDD8', 'GDD10', 
+                  'GDD020', 'GDD820', 'GDD1020',
+                  'GDDPLANT', 'GDDTSOI', 'A5TMIN', 'A10TMIN'
  /
  ! Adding the following variable causes the model to abort in debug mode
  ! with multiplying by a NaN, bugzilla bug 1325. EBK Apr/28/2011
Index: nl_cn_conly
===================================================================
--- nl_cn_conly (revision 31723)
+++ nl_cn_conly (working copy)
@@ -8,4 +8,19 @@
  hist_dov2xy    = .true.,.false.
  hist_ndens     = 1,1
  hist_fincl1    = 'TRAFFICFLUX', 'SNOWLIQ:A','SNOWICE:A'
+ hist_fincl2    = 'TG','TBOT','FIRE','FIRA','FLDS','FSDS',
+                  'FSR','FSA','FGEV','FSH','FGR','TSOI',
+                  'ERRSOI','BUILDHEAT','SABV','SABG',
+                  'FSDSVD','FSDSND','FSDSVI','FSDSNI',
+                  'FSRVD','FSRND','FSRVI','FSRNI',
+                  'TSA','FCTR','FCEV','QBOT','RH2M','H2OSOI',
+                  'H2OSNO','SOILLIQ','SOILICE', 
+                  'TSA_U', 'TSA_R',
+                  'TREFMNAV_U', 'TREFMNAV_R',
+                  'TREFMXAV_U', 'TREFMXAV_R',
+                  'TG_U', 'TG_R',
+                  'RH2M_U', 'RH2M_R',
+                  'QRUNOFF_U', 'QRUNOFF_R',
+                  'SoilAlpha_U',
+                  'Qanth', 'SWup', 'LWup', 'URBAN_AC', 'URBAN_HEAT'
  /
