rubisco-sfa / ilamb

Python software used in the International Land Model Benchmarking (ILAMB) project

License: BSD 3-Clause "New" or "Revised" License

Python 99.80% Makefile 0.08% Shell 0.12%
model benchmarking python3 earth-system-model


ilamb's Issues

`FileNotFoundError` in v2.7 but not v2.6

I'm getting FileNotFoundError: [Errno 2] No such file or directory: '/lcrc/group/e3sm/diagnostics/ilamb_data/DATA/mrro/Dai/basins_0.5x0.5.nc' when running ILAMB v2.7, but not ILAMB v2.6.

Is there a way to build a development environment for ILAMB (i.e., build ILAMB off an arbitrary commit rather than using an officially released version)? I was hoping to use git bisect to identify when this error was introduced and ultimately if it is an issue with ILAMB itself or in how I am using ILAMB.

Full stack trace:

Traceback (most recent call last):
  File "/lcrc/soft/climate/e3sm-unified/base/envs/e3sm_unified_1.9.2rc2_chrysalis/bin/ilamb-run", line 993, in <module>
    S = Scoreboard(
  File "/lcrc/soft/climate/e3sm-unified/base/envs/e3sm_unified_1.9.2rc2_chrysalis/lib/python3.10/site-packages/ILAMB/Scoreboard.py", line 508, in __init__
    TraversePreorder(self.tree, _initConfrontation)
  File "/lcrc/soft/climate/e3sm-unified/base/envs/e3sm_unified_1.9.2rc2_chrysalis/lib/python3.10/site-packages/ILAMB/Scoreboard.py", line 124, in TraversePreorder
    TraversePreorder(child, visit)
  File "/lcrc/soft/climate/e3sm-unified/base/envs/e3sm_unified_1.9.2rc2_chrysalis/lib/python3.10/site-packages/ILAMB/Scoreboard.py", line 124, in TraversePreorder
    TraversePreorder(child, visit)
  File "/lcrc/soft/climate/e3sm-unified/base/envs/e3sm_unified_1.9.2rc2_chrysalis/lib/python3.10/site-packages/ILAMB/Scoreboard.py", line 124, in TraversePreorder
    TraversePreorder(child, visit)
  File "/lcrc/soft/climate/e3sm-unified/base/envs/e3sm_unified_1.9.2rc2_chrysalis/lib/python3.10/site-packages/ILAMB/Scoreboard.py", line 122, in TraversePreorder
    visit(node)
  File "/lcrc/soft/climate/e3sm-unified/base/envs/e3sm_unified_1.9.2rc2_chrysalis/lib/python3.10/site-packages/ILAMB/Scoreboard.py", line 470, in _initConfrontation
    node.confrontation = Constructor(**(node.__dict__))
  File "/lcrc/soft/climate/e3sm-unified/base/envs/e3sm_unified_1.9.2rc2_chrysalis/lib/python3.10/site-packages/ILAMB/ConfTWSA.py", line 37, in __init__
    self.basins = r.addRegionNetCDF4(
  File "/lcrc/soft/climate/e3sm-unified/base/envs/e3sm_unified_1.9.2rc2_chrysalis/lib/python3.10/site-packages/ILAMB/Regions.py", line 139, in addRegionNetCDF4
    dset = Dataset(filename)
  File "src/netCDF4/_netCDF4.pyx", line 2464, in netCDF4._netCDF4.Dataset.__init__
  File "src/netCDF4/_netCDF4.pyx", line 2027, in netCDF4._netCDF4._ensure_nc_success
FileNotFoundError: [Errno 2] No such file or directory: '/lcrc/group/e3sm/diagnostics/ilamb_data/DATA/mrro/Dai/basins_0.5x0.5.nc'

Providing complete reproduction steps is somewhat complicated, since the error comes up while testing the zppy (https://github.com/E3SM-Project/zppy) post-processing package. The error doesn't come up when I set the zppy test to use a version of E3SM Unified with ILAMB v2.6, but it does when I use a version with ILAMB v2.7.

`underflow encountered in double_scalars` in ConfPermafrost.py

I'm using ILAMB 2.7.
In a run with two models, I'm getting the following error in the Permafrost confrontation for one of the models:

[DEBUG][5][WorkConfront][Permafrost/Obu2018][SturmSnowtk]
Traceback (most recent call last):
  File "/glade/p/cesm/lmwg/diag/ILAMB/CODE/ilamb/bin/ilamb-run", line 561, in WorkConfront
    c.confront(m)
  File "/glade/work/oleson/conda-envs/ilamb_lmwg/lib/python3.7/site-packages/ILAMB/ConfPermafrost.py", line 276, in confront
    score_missed[ptype] = both[ptype] / (both[ptype] + missed[ptype])
  File "/glade/work/oleson/conda-envs/ilamb_lmwg/lib/python3.7/site-packages/numpy/ma/core.py", line 4197, in __truediv__
    return true_divide(self, other)
  File "/glade/work/oleson/conda-envs/ilamb_lmwg/lib/python3.7/site-packages/numpy/ma/core.py", line 1166, in __call__
    m |= domain(da, db)
  File "/glade/work/oleson/conda-envs/ilamb_lmwg/lib/python3.7/site-packages/numpy/ma/core.py", line 853, in __call__
    return umath.absolute(a) * self.tolerance >= umath.absolute(b)
FloatingPointError: underflow encountered in double_scalars

As noted, Line 276 in ConfPermafrost.py is:

        score_missed[ptype] = both[ptype] / (both[ptype] + missed[ptype])

So I've printed out the variable values entering into that calculation using:
print("ptype: ",ptype)
print("missed: ",missed[ptype])
print("both: ",both[ptype])
print("both+missed: ",both[ptype]+missed[ptype])

The values are:
ptype: d
missed: 3.1715914949311625
both: 0.5357629054099063
both+missed: 3.707354400341069

I don't see any problem with those values in that calculation and I don't see why there would be an underflow.

Interestingly, the other model works fine: the calculation completes successfully and the values are:

ptype: d
missed: 2.692865569871369
both: 1.73930589189917
both+missed: 4.432171461770539
both/both+missed: 0.3924274832103093
score_missed: 0.3924274832103093

Any ideas? Maybe I'm not printing out the variables I think I am?
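For what it's worth, my reading of the guard shown at the bottom of the traceback (numpy/ma/core.py, line 853) is that numpy.ma multiplies the numerator by its tolerance, the smallest normal double. When the numerator is below 1.0 that product is subnormal, which counts as an underflow, so if floating point errors are set to raise (as ilamb-run appears to configure), the guard itself trips even though the division is harmless. A minimal sketch with the reported values:

```python
import numpy as np

# numpy.ma's divide-safety check multiplies the numerator by the smallest
# normal double (np.finfo(float).tiny). If the numerator is < 1, the product
# is subnormal, i.e. an underflow, even though the division itself is fine.
tiny = np.finfo(np.float64).tiny            # ~2.225e-308

with np.errstate(under="raise"):
    np.array(1.73930589189917) * tiny       # working model: product still normal
    try:
        np.array(0.5357629054099063) * tiny # failing model: product is subnormal
        underflowed = False
    except FloatingPointError:
        underflowed = True

print(underflowed)  # True
```

If that is what is happening, the values you printed are fine; the only difference between the two models is whether the numerator is above or below 1.0.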

`register_cmap()` got an unexpected keyword argument 'data'

ILAMB appears not to be compatible with matplotlib >= 3.4.0. As an example, see this CI build on conda-forge.

Traceback (most recent call last):
  File "/home/conda/feedstock_root/build_artifacts/ilamb_1620761448485/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/bin/ilamb-run", line 20, in <module>
    if "stoplight" not in plt.colormaps(): RegisterCustomColormaps()
  File "/home/conda/feedstock_root/build_artifacts/ilamb_1620761448485/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/lib/python3.9/site-packages/ILAMB/Post.py", line 1030, in RegisterCustomColormaps
    plt.register_cmap(name='stoplight', data=RdYlGn)
TypeError: register_cmap() got an unexpected keyword argument 'data'

This keyword was deprecated about a year and a half ago in matplotlib 3.3.0 (matplotlib/matplotlib#15875) and was dropped in 3.4.0.
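For reference, on matplotlib >= 3.6 the replacement API is matplotlib.colormaps.register. A sketch of the equivalent registration (the segment data below is an illustrative red-yellow-green ramp, not ILAMB's actual stoplight data from Post.py):

```python
import matplotlib
from matplotlib.colors import LinearSegmentedColormap

# Illustrative segment data only; ILAMB's real "stoplight" data differs.
RdYlGn = {
    "red":   [(0.0, 1.0, 1.0), (0.5, 1.0, 1.0), (1.0, 0.0, 0.0)],
    "green": [(0.0, 0.0, 0.0), (0.5, 1.0, 1.0), (1.0, 1.0, 1.0)],
    "blue":  [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
}

cmap = LinearSegmentedColormap("stoplight", RdYlGn)
if "stoplight" not in matplotlib.colormaps:   # avoid double registration
    matplotlib.colormaps.register(cmap)       # replaces plt.register_cmap(...)
```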

Can custom confrontations be used from configuration script?

The tutorial page on custom confrontations does not indicate whether they can be used in a configuration script (such as sample.cfg from the example: https://www.ilamb.org/Downloads/minimal_ILAMB_data.tgz).

Diving into the code, it seems that Scoreboard is set up for constructing custom confrontation objects, but there doesn't appear to be a way to specify the confrontation type (ctype) of a configuration script node.
By default, a node is given the ctype value None, which is then inherited from its parent nodes (via InheritVariableNames), but nothing reads a ctype from the configuration script and assigns it to the node (specifically, ParseScoreboardConfigureFile does not).

Is it currently possible to use a custom confrontation from a configuration script?

No spatially integrated regional mean for regions other than global

I used the ILAMB sample data from the tutorial to run over the region 'shsa, Southern Hemisphere South America'; however, there is no spatially integrated regional mean in the output: it simply says 'Data not available'. Can anyone help take a look? Here is the command I run:

ilamb-run --config sample.cfg --model_root $ILAMB_ROOT/ILAMB_sample/MODELS/ --regions global shsa

Thanks a lot.

Errors associated with FLUXCOM/reco.nc and FLUXCOM/gpp.nc

I'm getting errors associated with new versions of FLUXCOM/reco.nc and FLUXCOM/gpp.nc datasets. I see that the reco and gpp variables in these files are ordered (lat,lon,time) instead of (time,lat,lon), e.g.,

    float gpp(lat, lon, time) ;
            gpp:_FillValue = nanf ;
            gpp:long_name = "GPP" ;
            gpp:units = "g m-2 day-1" ;

I'm wondering if this is what is causing the error. An example error is:

[DEBUG][3][WorkConfront][EcosystemRespiration/FLUXCOM][CLM50]
Traceback (most recent call last):
  File "/glade/campaign/cesm/community/lmwg/diag/ILAMB/CODE/ilamb/bin/ilamb-run", line 561, in WorkConfront
    c.confront(m)
  File "/glade/work/oleson/conda-envs/ilamb_casper_lmwg/lib/python3.7/site-packages/ILAMB/Confrontation.py", line 431, in confront
    obs, mod = self.stageData(m)
  File "/glade/work/oleson/conda-envs/ilamb_casper_lmwg/lib/python3.7/site-packages/ILAMB/Confrontation.py", line 376, in stageData
    lons=None if obs.spatial else obs.lon,
  File "/glade/work/oleson/conda-envs/ilamb_casper_lmwg/lib/python3.7/site-packages/ILAMB/ModelResult.py", line 345, in extractTimeSeries
    assert lats.shape == lons.shape
AssertionError
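If the dimension order is indeed the cause, one workaround is to permute the files back to (time, lat, lon) before running ILAMB, e.g. with NCO's `ncpdq -a time,lat,lon gpp.nc gpp_fixed.nc`. The equivalent operation in plain numpy (the array shape here is made up for illustration):

```python
import numpy as np

gpp = np.zeros((360, 720, 396))     # stored as (lat, lon, time), as in the file
gpp = np.transpose(gpp, (2, 0, 1))  # move the time axis to the front
print(gpp.shape)  # (396, 360, 720)
```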

AttributeError: module 'matplotlib.cm' has no attribute 'cmap_d'

There is an AttributeError: module 'matplotlib.cm' has no attribute 'cmap_d' when running ILAMB v2.6. The relevant log is as follows:

[INFO][0][confront][LeafAreaIndex/AVHRR][20231012.v3alpha04_trigrid_bgc.piControl.chrysalis] Success
[DEBUG][0][WorkPost][LeafAreaIndex/AVHRR][20231012.v3alpha04_trigrid_bgc.piControl.chrysalis]
Traceback (most recent call last):
  File "/lcrc/soft/climate/e3sm-unified/base/envs/e3sm_unified_1.9.0_login/bin/ilamb-run", line 426, in WorkPost
    c.modelPlots(m)
  File "/lcrc/soft/climate/e3sm-unified/base/envs/e3sm_unified_1.9.0_login/lib/python3.10/site-packages/ILAMB/Confrontation.py", line 710, in modelPlots
    self._relationship(m)
  File "/lcrc/soft/climate/e3sm-unified/base/envs/e3sm_unified_1.9.0_login/lib/python3.10/site-packages/ILAMB/Confrontation.py", line 1153, in _relationship
    _plotDistribution(ref_dist[0],ref_dist[1],ref_dist[2],
  File "/lcrc/soft/climate/e3sm-unified/base/envs/e3sm_unified_1.9.0_login/lib/python3.10/site-packages/ILAMB/Confrontation.py", line 1003, in _plotDistribution
    cmap = 'plasma' if 'plasma' in plt.cm.cmap_d else 'summer')
AttributeError: module 'matplotlib.cm' has no attribute 'cmap_d'

I'm wondering if this is a bug specific to v2.6 and earlier?

Thank you!

NaN problem

I think that ILAMB already handles floating point exceptions very well. But today I ran ILAMB v2.5 at commit 672ce40 and found that there are still some NaNs among the actual scores in the result tables. When ILAMB tries to scale these scores to compute the relative scores, the NaN is included, which turns all the other valid actual scores into NaNs, and the metrics for all models show missing values on the land page.

python3: geos_ts_c.cpp:3991: int GEOSCoordSeq_getSize_r(GEOSContextHandle_t, const geos::geom::CoordinateSequence*, unsigned int*): Assertion `0 != cs' failed.

Found this error while running ilamb-run. Calling ilamb-setup results in the same libgeos assertion error. The solution was:

1 - uninstall ILAMB, cartopy and shapely using pip
2 - reinstall shapely from source:
$ pip3 install shapely --no-binary shapely
3 - reinstall ILAMB from the git repo

I found this solution while searching the web for the error; I just did not find an explanation for it.
The same error arises when running the sample datasets.

The messages from my shell:


Searching for model results in /home/jdarela/Desktop/caete/CAETE_BENCHMARK/ILAMB_BENCHMARKS/MODELS/

                                        CAETE-CNP

Parsing config file caete_bcmk.cfg...

                      GrossPrimaryProduction/GBAF Initialized

Running model-confrontation pairs...

                      GrossPrimaryProduction/GBAF CAETE-CNP            Completed  0:00:01

Finishing post-processing which requires collectives...

Geometry must be a Point or LineString
python3: geos_ts_c.cpp:3991: int GEOSCoordSeq_getSize_r(GEOSContextHandle_t, const geos::geom::CoordinateSequence*, unsigned int*): Assertion `0 != cs' failed.
[zabele:16054] *** Process received signal ***
[zabele:16054] Signal: Aborted (6)
[zabele:16054] Signal code:  (-6)
[zabele:16054] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x3f040)[0x7f549203f040]
[zabele:16054] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xc7)[0x7f549203efb7]
[zabele:16054] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x141)[0x7f5492040921]
[zabele:16054] [ 3] /lib/x86_64-linux-gnu/libc.so.6(+0x3048a)[0x7f549203048a]
[zabele:16054] [ 4] /lib/x86_64-linux-gnu/libc.so.6(+0x30502)[0x7f5492030502]
[zabele:16054] [ 5] /usr/lib/x86_64-linux-gnu/libgeos_c.so.1(GEOSCoordSeq_getSize_r+0x115)[0x7f546d242645]
[zabele:16054] [ 6] /home/jdarela/.local/lib/python3.6/site-packages/cartopy/trace.cpython-36m-x86_64-linux-gnu.so(+0x17346)[0x7f546d4ba346]
[zabele:16054] [ 7] /usr/bin/python3(_PyObject_FastCallKeywords+0x19c)[0x5a9dac]
[zabele:16054] [ 8] /usr/bin/python3[0x50a433]
[zabele:16054] [ 9] /usr/bin/python3(_PyEval_EvalFrameDefault+0x444)[0x50beb4]
[zabele:16054] [10] /usr/bin/python3[0x5095c8]
[zabele:16054] [11] /usr/bin/python3[0x50a2fd]
[zabele:16054] [12] /usr/bin/python3(_PyEval_EvalFrameDefault+0x444)[0x50beb4]
[zabele:16054] [13] /usr/bin/python3[0x507be4]
[zabele:16054] [14] /usr/bin/python3[0x509900]
[zabele:16054] [15] /usr/bin/python3[0x50a2fd]
[zabele:16054] [16] /usr/bin/python3(_PyEval_EvalFrameDefault+0x444)[0x50beb4]
[zabele:16054] [17] /usr/bin/python3[0x507be4]
[zabele:16054] [18] /usr/bin/python3[0x509900]
[zabele:16054] [19] /usr/bin/python3[0x50a2fd]
[zabele:16054] [20] /usr/bin/python3(_PyEval_EvalFrameDefault+0x444)[0x50beb4]
[zabele:16054] [21] /usr/bin/python3[0x507be4]
[zabele:16054] [22] /usr/bin/python3[0x509900]
[zabele:16054] [23] /usr/bin/python3[0x50a2fd]
[zabele:16054] [24] /usr/bin/python3(_PyEval_EvalFrameDefault+0x1226)[0x50cc96]
[zabele:16054] [25] /usr/bin/python3[0x5095c8]
[zabele:16054] [26] /usr/bin/python3[0x50a2fd]
[zabele:16054] [27] /usr/bin/python3(_PyEval_EvalFrameDefault+0x444)[0x50beb4]
[zabele:16054] [28] /usr/bin/python3[0x507be4]
[zabele:16054] [29] /usr/bin/python3[0x509900]
[zabele:16054] *** End of error message ***
Aborted (core dumped)

URLError during post-processing

During the post-processing phase of ilamb-run, most of the plotting fails with a red exception titled URLError. Digging into the logfile I see that cartopy is trying to download a file, but the connection times out.

Add continuous integration using our test suite

ILAMB does have a test suite that until now I have managed manually. Before making non-trivial commits to master, I go into the test directory and run make. This executes a reduced set of confrontations against an anonymous model and compares overall scores to the values in scores_test.csv.gold. This means that our current test coverage is limited to aspects which would change overall scores (absolute tol = 1e-12).

There are a number of challenges involved in making this run automatically on PRs to master:

  • ILAMB would need to be deployable automatically, which we can now do via conda. Currently this installs a fixed version; we would have to set it up to install the PR.
  • The observational and model data must be downloaded. I am not certain of the current size of the required test data, but it is large. CI is available to use for free if we do not need to run too long / very often. In order to make use of this we would need to trim down the model / obs data and perhaps even coarsen it. The tests currently take ~10 minutes to run on 4 processors.
  • As ILAMB has grown, I have not continued to add tests to cover new areas. IOMB is unprotected, as is the diurnal work.

What is the conventional way to deal with unit differences between benchmark datasets for the same variable?

Hi
I encountered a problem while creating the netcdf file to contain my model data.
When I compared the albedo from the model to the benchmark datasets from CERES and GEWEX.SRB, I found that the albedo units in the two benchmark datasets are different; the albedo unit from the model is consistent with CERES but not with GEWEX.SRB. So when I run ILAMB to compare the model albedo with GEWEX.SRB, it raises a UnitConversionError.

My question is: is there a conventional way, in the configuration file or by some other means, to define alternative units for the model albedo so that it is consistent with each benchmark dataset, instead of recreating the nc file with units matching each benchmark?

Thanks
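Not an authoritative answer, but if the two units are physically convertible (say "%" versus "1"), ILAMB normally converts automatically, and a UnitConversionError may instead point at a units attribute it cannot parse. For display, the shipped ilamb.cfg files carry per-section unit keys; a sketch of how an Albedo section might declare a common unit (key names assumed from ilamb.cfg, and this does not help if the model file's units attribute is malformed):

```ini
[h2: Albedo]
variable   = "albedo"
# report both comparisons as a fraction
table_unit = "1"
plot_unit  = "1"

[CERES]
source = "DATA/albedo/CERES/albedo_0.5x0.5.nc"

[GEWEX.SRB]
source = "DATA/albedo/GEWEX.SRB/albedo_0.5x0.5.nc"
```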

Allow fractional pixels when generating input shapefile masks

The ability to create masks from input shapefiles is very welcome, but we ran into an issue that I think can be fairly easily resolved. It seems that the masking procedure considers each pixel either "all in" or "all out", i.e. binary. In the case where a polygon is on the order of the pixel size of the raster being sampled, e.g.

[image: a polygon on the order of the pixel size of the raster being sampled],

it would be great for the mask pixels to take fractional values (i.e. not binary), where the value represents the fraction of the pixel that is covered by the polygon. Then (if not already implemented), the mask can be used as a weights grid when doing the spatial aggregations for any raster.

We have a number of smaller watersheds that might overlap 2-4 pixels for coarser datasets and would appreciate this kind of precise zonal statistics.
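To make the request concrete, here is a small sketch (plain numpy, not ILAMB code) of one way to compute fractional coverage: rasterize the polygon on a finer grid, block-average to get a per-pixel coverage fraction, and use that as the weights grid in spatial aggregation:

```python
import numpy as np

def fractional_mask(fine_inside: np.ndarray, factor: int) -> np.ndarray:
    """fine_inside: boolean mask on a grid `factor` times finer per axis."""
    ny, nx = fine_inside.shape
    blocks = fine_inside.reshape(ny // factor, factor, nx // factor, factor)
    return blocks.mean(axis=(1, 3))  # coverage fraction per coarse pixel

# 8x8 fine grid; the polygon covers the left 3 fine columns of a 2x2 coarse grid
fine = np.zeros((8, 8), dtype=bool)
fine[:, :3] = True
weights = fractional_mask(fine, 4)
print(weights)  # [[0.75 0.  ]
                #  [0.75 0.  ]]

# the fractions then act as weights in a spatial aggregation
data = np.array([[1.0, 3.0], [2.0, 4.0]])
print(np.average(data, weights=weights))  # 1.5
```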

treating json as js is not working on compy

Currently, ILAMB uses "<script type="text/javascript" src="scalars.json"></script>" to load the JSON file that stores the scores. But this is not a standard way to load a JSON file in an HTML page. On compy, the server enables strict MIME type checking, and loading the JSON file this way produces the following error:

Refused to execute script from 'https://compy-dtn.pnl.gov/minxu/myresults/_build/scalars.json' because its MIME type ('application/json') is not executable, and strict MIME type checking is enabled.

Using ajax (fetching the file and parsing it as JSON) may solve the problem.

Gridded benchmarking data must be global coverage?

I'd like to define custom regions based on a regions.txt file and gridded benchmarking data of the USA region.

As the tutorial says, 'ILAMB expects that the analysis will be performed over at least the global region. Overall scores are based on information in that region.'.

Does this mean that the gridded benchmarking datasets must be global, rather than regional, in coverage?

@nocollier , your help will be greatly appreciated!!

Why does adding print or logger.debug or logger.info calls in the py files under src/ILAMB not work?

Hi,

I have a question about how to debug. I installed ILAMB and ran the example provided in the tutorial successfully. Then I tried to use it with my own data; however, there is an error saying VarNotComparable. I tried adding print or logger.debug or logger.info calls in ilamblib.py to print out the values of some variables, but I can't see the messages in either the log file or the ILAMB03.log file. So my question is: why do print or logger calls added to the source py files under src/ILAMB not work? What should I do if I want to print out the values of some variables?

Thanks.
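I can't speak to ILAMB's exact logger wiring (ilamb-run sets up its own log files, apparently one per process, hence names like ILAMB03.log), but the usual reasons added logger calls stay silent are that the edited package was not reinstalled (e.g. via pip install -e .) or that the logger in that module has no handler attached or its level filters the message. A generic sketch of making DEBUG messages visible, using a hypothetical logger name:

```python
import io
import logging

# Nothing is emitted unless a handler is attached and both the logger's
# and the handler's levels admit the message.
logger = logging.getLogger("ILAMB.demo")   # hypothetical name for illustration
logger.setLevel(logging.DEBUG)

buf = io.StringIO()                        # stand-in for a log file
handler = logging.StreamHandler(buf)
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)

logger.debug("value of x: %r", 42)
print(buf.getvalue().strip())              # value of x: 42
```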

"Full functionality" documentation for configuration language does not exist.

On the 'First Steps' tutorial, under 'Configure File', the last paragraph says: "The configuration language is small, but allows you to change a lot of the behavior of the system. The full functionality is documented here." However, the link ("here") points to "doc/nope_not_yet", which I assume is/was a placeholder link.

Does the full functionality documentation for the configuration language exist somewhere else and this link was simply a placeholder?

Running ILAMB with model output split across multiple years

Should it be possible to run ILAMB with model output that is split across multiple years? I would like to avoid creating a "combined copy" of my model outputs if possible and point ILAMB to each "per year" model output, i.e.

ILAMB_ROOT/MODELS/
└── CABLE
    ├── cable_out_000.nc  # year: 1900
    ├── cable_out_001.nc  # year: 1901
    ├── ...
    └── cable_out_XXX.nc  # year: N

If I try to run ILAMB with the above directory structure (with two model output files corresponding to two consecutive years), ILAMB will throw an error in the final post processing step:

[INFO][0][<module>] Linux gadi-cpu-clx-1223.gadi.nci.org.au 4.18.0-477.15.1.el8.nci.x86_64 #1 SMP Sun Jul 16 11:36:47 AEST 2023 x86_64 x86_64
[INFO][0][<module>] /g/data/hh5/public/apps/miniconda3/envs/analysis3-23.04/lib/python3.9/site-packages/ILAMB (2.7)
[INFO][0][<module>] /g/data/hh5/public/apps/miniconda3/envs/analysis3-23.04/lib/python3.9/site-packages/numpy (1.23.5)
[INFO][0][<module>] /g/data/hh5/public/apps/miniconda3/envs/analysis3-23.04/lib/python3.9/site-packages/matplotlib (3.7.2)
[INFO][0][<module>] /g/data/hh5/public/apps/miniconda3/envs/analysis3-23.04/lib/python3.9/site-packages/netCDF4 (1.6.0)
[INFO][0][<module>] /g/data/hh5/public/apps/miniconda3/envs/analysis3-23.04/lib/python3.9/site-packages/cf_units (3.2.0)
[INFO][0][<module>] /g/data/hh5/public/apps/miniconda3/envs/analysis3-23.04/lib/python3.9/site-packages/sympy (1.12)
[INFO][0][<module>] /g/data/hh5/public/apps/miniconda3/envs/analysis3-23.04/lib/python3.9/site-packages/mpi4py (3.1.4)
[INFO][0][<module>] 2023-10-31 15:39:01.421098
[INFO][0][MakeComparable][Albedo/GEWEX.SRB][CABLE] Spatial data was clipped from the reference:  before: [ 36 360 720] after: [ 36 279 720]
[INFO][0][AnalysisMeanStateSpace][albedo] Bias scored using Collier2018
[INFO][0][AnalysisMeanStateSpace][albedo] RMSE scored using Collier2018
[INFO][0][confront][Albedo/GEWEX.SRB][CABLE] Success
[DEBUG][0][WorkPost][Albedo/GEWEX.SRB][CABLE]
Traceback (most recent call last):
  File "/g/data/hh5/public/apps/miniconda3/envs/analysis3-23.04/bin/ilamb-run", line 616, in WorkPost
    c.modelPlots(m)
  File "/g/data/hh5/public/apps/miniconda3/envs/analysis3-23.04/lib/python3.9/site-packages/ILAMB/Confrontation.py", line 925, in modelPlots
    var = Variable(
  File "/g/data/hh5/public/apps/miniconda3/envs/analysis3-23.04/lib/python3.9/site-packages/ILAMB/Variable.py", line 140, in __init__
    out = il.FromNetCDF4(
  File "/g/data/hh5/public/apps/miniconda3/envs/analysis3-23.04/lib/python3.9/site-packages/ILAMB/ilamblib.py", line 941, in FromNetCDF4
    t, t_bnd, cbounds, begin, end, calendar = GetTime(
  File "/g/data/hh5/public/apps/miniconda3/envs/analysis3-23.04/lib/python3.9/site-packages/ILAMB/ilamblib.py", line 359, in GetTime
    raise ValueError(msg)
ValueError: Time intervals defined in ./_build/RadiationandEnergyCycle/Albedo/GEWEX.SRB/GEWEX.SRB_CABLE.nc:time_bnds_ are not continuous

[INFO][0][<module>][total time] 40.5 s

The above exception occurs due to problematic time bounds in _build/RadiationandEnergyCycle/Albedo/GEWEX.SRB/GEWEX.SRB_CABLE.nc. Inspecting the time bounds with ncdump shows that boundary points are not contiguous:

   time_bnds_ =
  49261, 49290.5,
  49290.5, 49320,
  49320, 49349.5,
  49349.5, 49380,
  49380, 49410.5,
  49410.5, 49441,
  49441, 49471.5,
  49471.5, 49502.5,
  49502.5, 49533,
  49533, 49563.5,
  49563.5, 49594,
  49594, 49624.5,   <--- these two time bounds do not share a boundary
  49625.5, 49655.5, <---
  49655.5, 49685.5,
  49685.5, 49714.5,
  49714.5, 49745,
  49745, 49775.5,
  49775.5, 49806,
  49806, 49836.5,
  49836.5, 49867.5,
  49867.5, 49898,
  49898, 49928.5,
  49928.5, 49959 ;

It looks like these time bounds are being set in CombineVariables: when creating the merged variable from the list of variables, the time bounds for each variable are concatenated together; however, a shared boundary point between each variable is not created. See below:

ILAMB/src/ILAMB/ilamblib.py

Lines 2641 to 2651 in 8d1e9cf

# Assemble the data
shp = (nt.sum(),) + V[0].data.shape[1:]
time = np.zeros(shp[0])
time_bnds = np.zeros((shp[0], 2))
data = np.zeros(shp)
mask = np.zeros(shp, dtype=bool)
for i, v in enumerate(V):
    time[ind[i] : ind[i + 1]] = v.time
    time_bnds[ind[i] : ind[i + 1], ...] = v.time_bnds
    data[ind[i] : ind[i + 1], ...] = v.data
    mask[ind[i] : ind[i + 1], ...] = v.data.mask
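If that diagnosis is right, one possible workaround (a sketch in plain numpy, not ILAMB code) is to snap each interval's start to the previous interval's end after concatenation, whenever the gap is small:

```python
import numpy as np

# Bounds from two consecutive files; the second starts 1.0 day late,
# mimicking the 49624.5 / 49625.5 gap shown in the ncdump output.
time_bnds = np.array([[49594.0, 49624.5],
                      [49625.5, 49655.5]])

gap = time_bnds[1:, 0] - time_bnds[:-1, 1]
snap = np.abs(gap) <= 1.0                   # tolerance in days, a judgment call
time_bnds[1:, 0][snap] = time_bnds[:-1, 1][snap]

print(time_bnds[1, 0])  # 49624.5, now shared with the previous interval
```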

For reference, this is the configuration and command I use to run ILAMB:

# test.cfg
[h1: Radiation and Energy Cycle]
bgcolor = "#FFECE6"

[h2: Albedo]
variable = "albedo"
alternate_vars = "Albedo"

[GEWEX.SRB]
source = "DATA/albedo/GEWEX.SRB/albedo_0.5x0.5.nc"

ilamb-run --config test.cfg --model_root $ILAMB_ROOT/MODELS/ --regions global --model_year 1903 1985 --study_limits 1985 1987

Potential inconsistency in diagnostics generation after updating DATA and version

In the tests, the ILAMB version is updated to v2.7 and the DATA directory is updated with ilamb-fetch. This update did resolve #83, but I also noticed some inconsistencies.
Results:
Before updates
After updates

It looks like only a subset of the Biomass results (2 out of 5) are showing up, and the runoff and permafrost diagnostics are not shown.
I did see name changes in the DATA directory structure, for instance the runoff directory was renamed to mrro and the sub-folders of biomass were renamed, etc. Not sure if that is the cause. Pinging @nocollier, any insights are appreciated!

Variable.__init__: shifting of 'lon' and 'lon_bnds' incorrect

Calling the constructor Variable() with

lon = [5. 112.5 299.9783] and lon_bnds = None leads, by line 156, to
lon = [5. 112.5 -60.021698]. Assuming spatial data, line 176 produces

lon_bnds =
[[ -48.75       58.75    ]
 [  58.75       26.239151]
 [  26.239151 -146.282547]]

where 112.5 and 299.9783 already do not match their bounds.
This later leads to an assertion error in line 193, when

lon = [-60.021698 5. 112.5] and
lon_bnds =
[[-180.       -146.282547]
 [ -48.75       58.75    ]
 [  58.75      180.      ]].
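For comparison, a sketch (plain numpy, not the Variable internals) of an order of operations that keeps each value inside its bounds: wrap to [-180, 180), sort, and only then derive bounds from midpoints:

```python
import numpy as np

lon = np.array([5.0, 112.5, 299.9783])
lon = (lon + 180.0) % 360.0 - 180.0      # wrap: [5., 112.5, -60.0217]
lon = np.sort(lon)                       # sort BEFORE building bounds
mid = 0.5 * (lon[:-1] + lon[1:])
lon_bnds = np.column_stack([np.concatenate([[-180.0], mid]),
                            np.concatenate([mid, [180.0]])])

# every cell center now lies inside its own bounds
print(np.all((lon_bnds[:, 0] <= lon) & (lon <= lon_bnds[:, 1])))  # True
```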

Errors from scoreboard.py at the end of a run

Hi there,

An error from Scoreboard.py occurs when @acordonez and I run ILAMB through cmec-driver with the latest master.

When I try to use ilamb-run to run ILAMB standalone, I get the same error as below:

Traceback (most recent call last):
  File "/home/zhang40/miniconda3/envs/ilamb/bin/ilamb-run", line 682, in <module>
    S.createHtml(M)
  File "/home/zhang40/miniconda3/envs/ilamb/lib/python3.7/site-packages/ILAMB/Scoreboard.py", line 420, in createHtml
    scores,rel_tree = self.createJSON(M)
  File "/home/zhang40/miniconda3/envs/ilamb/lib/python3.7/site-packages/ILAMB/Scoreboard.py", line 393, in createJSON
    rel_tree = GenerateRelationshipTree(self,M)
  File "/home/zhang40/miniconda3/envs/ilamb/lib/python3.7/site-packages/ILAMB/Scoreboard.py", line 958, in GenerateRelationshipTree
    h2 = Node(data.confrontation.longname)
AttributeError: 'NoneType' object has no attribute 'longname'

Any suggestions or insights are welcome. Thank you!

Unable to download ILAMB-Data

The command ilamb-fetch --remote_root http://ilamb.ornl.gov/ILAMB-Data --no-check-certificate fails because of a timeout issue. It looks like the data's been moved.

study_limits option seems not working for GlobalNetEcosystemCarbonBalance

I try to run ILAMB v2.5 at commit 672ce40 with the option "--study_limits 1850 2010", but I can still see "nbp(2011)" and "diff(2011)" in the result table. Moreover, there are blanks in "nbp(2010)" and "diff(2010)" in the table.

A similar problem can be seen at this link: https://www.ilamb.org/CMIP6/historical/EcosystemandCarbonCycle/GlobalNetEcosystemCarbonBalance/GCP/GCP.html?model=E3SM-CTC. I do not know why the value for E3SM-CTC is shown at 2007 while the others are at 2015.

Error occurred when changing regions to Aust

Hi,
I occurred a error when I try to run ilamb in Australia region which never happened in global region.
Screenshot 2023-07-19 at 10 03 18 am
it seems a post-processing problem when ilamb trying to get data from a .nc file but did not find it.
Dose anyone know how to deal with it?

Add a changelog or released.md?

It seems that there is no document in the repository describing the incremental changes between versions. Such a document would help users track differences in results across ILAMB versions.

Average locations, then score the result

Reading Collier et al. (2018), it seems that the procedure for computing a score is as follows (using bias as an example):

  1. calculate the relative bias error at a given location (equation 13)
  2. score the relative error for that location (equation 14)
  3. compute the scalar score as the average score across all locations (equation 15)

But I believe a better procedure is:

  1. calculate the relative bias error
  2. average across all locations
  3. score the result
    Or when scoring a given location, just steps 1 and 3.

Have I misread the paper? When you use the first method, tweaking \alpha in the scoring function can alter how models rank relative to one another, which isn't ideal.
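A toy numerical illustration of the difference (made-up errors, and a Collier-style exponential score s(eps) = exp(-\alpha|eps|) assumed for concreteness): model A is accurate in one location and poor in another, model B is uniformly mediocre, and both have the same mean relative error:

```python
import numpy as np

def score(eps, alpha=1.0):
    # exponential scoring of a relative error, as in equation 14 of the paper
    return np.exp(-alpha * np.abs(eps))

eps_A = np.array([0.1, 1.9])   # accurate here, poor there
eps_B = np.array([1.0, 1.0])   # uniformly mediocre; same mean error as A

# paper's procedure: score each location, then average the scores
sA1, sB1 = score(eps_A).mean(), score(eps_B).mean()
# proposed procedure: average the relative error, then score once
sA2, sB2 = score(eps_A.mean()), score(eps_B.mean())

print(round(sA1, 3), round(sB1, 3))  # 0.527 0.368 -> A ranked above B
print(round(sA2, 3), round(sB2, 3))  # 0.368 0.368 -> tie
```

Under score-then-average the ranking of A and B depends on \alpha (they tie as \alpha goes to 0), whereas average-then-score ties them for any \alpha, which is the sensitivity described above.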

Create a model configuration capability

Currently, ILAMB reads in a configure file which sets up the benchmark datasets in a hierarchy and provides a location to specify many options for how the analysis will take place. Setting up models is by comparison very simple, yet limited. We support two methods for setting up models, but neither is rich enough to allow us to implement:

  • model groups: we would use these to, say, group together CMIP5 models and then auto-generate a mean and display the models together with an offset in the overview image.
  • perturbation experiments: we would like to set up a model and specify other files which are to be considered perturbations, to enable benchmarks like FACE or Will's paper.
  • result caching: this model setup could also specify a location for the intermediate results files to be stored. ILAMB could then check this location before rerunning an analysis. In this way, we could share these files so a model center need not rerun the whole collection--just their addition.

I suggest that initially we just build the model configuration setup to support what we currently do with models in ILAMB, but also keep in mind that the above is what we would like to use the capability to do.

Integration bounds

Using the method Variable.integrateInTime with t0 == tf should lead to a zero value, as far as I interpret integrals. Unfortunately, it does not, because line 299:

time_bnds[(t0>time_bnds[:,0])*(t0<time_bnds[:,1]),0] = t0

changes the variable time_bnds in such a way that line 300:

time_bnds[(tf>time_bnds[:,0])*(tf<time_bnds[:,1]),1] = tf

no longer has any effect. So the end boundary of the integral is indeed larger than the value specified. Actually, I am not sure what other things might happen in other situations; it is a tricky thing to check.

The same issue is very likely to happen in the integrateInDepth method, lines 418--423.
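The effect is easy to reproduce with made-up bounds (a sketch of the two quoted lines only, not the full method):

```python
import numpy as np

# Three made-up intervals; ask for a zero-width window at t0 == tf == 15.
time_bnds = np.array([[0.0, 10.0], [10.0, 20.0], [20.0, 30.0]])
t0 = tf = 15.0

# line 299: clips the containing interval's START to t0 -> row 1 becomes [15, 20]
time_bnds[(t0 > time_bnds[:, 0]) * (t0 < time_bnds[:, 1]), 0] = t0
# line 300: tf > 15 is now false for row 1 (strict inequality), so nothing changes
time_bnds[(tf > time_bnds[:, 0]) * (tf < time_bnds[:, 1]), 1] = tf

print(time_bnds[1])  # [15. 20.]  (width 5, not the expected 0)
```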

Installation on cheyenne requires change to ilamb-sysmpi.yml

Just documenting this here.
I use the following steps to install ILAMB on the NCAR machine cheyenne:
git clone https://github.com/rubisco-sfa/ILAMB.git ./ilamb_standalone
cd ilamb_standalone/
conda env create -f ilamb-sysmpi.yml
conda activate ilamb
pip install ./

In order for this to be successful, I have to make the following change to ilamb-sysmpi.yml:

diff --git a/ilamb-sysmpi.yml b/ilamb-sysmpi.yml
index 5c0a5d4..2dab295 100644
--- a/ilamb-sysmpi.yml
+++ b/ilamb-sysmpi.yml
@@ -19,5 +19,6 @@ dependencies:
   - cf_units
   - cython
   - psutil
+  - pip
   - pip:
     - mpi4py

Otherwise, I get an endless number of messages like this:

Warning: you have pip-installed dependencies in your environment file, but you do not list pip itself as one of your conda dependencies. Conda may not use the correct pip to install your packages, and they may end up in the wrong place. Please add an explicit pip dependency. I'm adding one for you, but still nagging you.

I don't know if this is machine dependent or something specific to my environment. Not a big deal, since I can install and run ILAMB successfully with this change.
