Tutorial notebooks for the OGGM model, powered by hub.oggm.org and MyBinder.
Web: https://oggm.org/tutorials
License: BSD-3-Clause
Instructions are here: https://jupyterbook.org/publish/gh-pages.html#automatically-host-your-book-with-github-actions
Currently I use ghp-import
locally - it's okay for now, but CI would help to detect issues and save me the work.
The argument against a CI build is that it requires quite some time and resources...
@TimoRoth if you feel bored and have time, I'd welcome your help on this (not super urgent for now).
For info and to keep track here is the issue report for the notebook / pandas bug: pandas-dev/pandas#46268
Maybe a more elegant solution in the notebook would be to initialise empty columns before assigning cenlon/cenlat, like template['CenLon'] = '',
so that the line template.loc[i, 'CenLon'] = np.array(cenlon)
could be reverted to template.loc[i, 'CenLon'] = cenlon
again.
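For illustration, a minimal pandas sketch of this workaround (the template DataFrame below is a made-up stand-in for the one in the notebook):

```python
import numpy as np
import pandas as pd

# Stand-in for the notebook's template DataFrame (hypothetical content)
template = pd.DataFrame({'Name': ['Glacier A', 'Glacier B']})

# Initialise the columns first, as suggested above ...
template['CenLon'] = ''
template['CenLat'] = ''

# ... so that plain scalar assignment works again, without the
# np.array() wrapper that the pandas bug made necessary
cenlon, cenlat = 10.75, 46.80
i = 0
template.loc[i, 'CenLon'] = cenlon
template.loc[i, 'CenLat'] = cenlat
```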
There is an error when I run the OGGM workflow after switching to RGI version 7. For example, get_filepath("dem") fails: the dem.tif file is saved as /tmp/OGGM/OGGM-GettingStarted-10m/per_glacier/RGI2000-v7.0-G-13/RGI2000-v7.0-G-13-53/RGI2000-v7.0-G-13-53637/DEM3/dem.tif,
but the path cannot be retrieved correctly.
Maybe the source code is outdated.
This is a consequence of: #12
(cc @ehultee @Suther11LBU)
I know @anoukvlug is working on something similar; maybe these ideas help with that effort.
Part I:
How to calibrate the SMB with different climate data?
This section of the docs (#climate data) needs to be linked to a tutorial that performs the same climate tasks as the default preprocessed directories in OGGM, with at least one more example from the collection of climate databases, probably showing how to run the following lines of code with different datasets.
@bearecinos:
The main issue for me and our student when using different climate data was the precipitation factor. The winter_prcp_fac should always be set to False if we don't know what that value is for our own climate data, and a sensitivity analysis should be made for 'prcp_fac' (which is climate-data dependent):
cfg.PARAMS['use_winter_prcp_fac'] = False
cfg.PARAMS['prcp_fac'] = <value from a sensitivity analysis>
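For reference, these settings would look something like the following in a run script. This is a hedged sketch: the loop values are placeholders, not recommendations, and the calibration step is only indicated by a comment.

```python
from oggm import cfg

cfg.initialize()

# Switch off the winter-precipitation calibration when its value is
# unknown for a custom climate dataset (as suggested above)
cfg.PARAMS['use_winter_prcp_fac'] = False

# Placeholder values for a prcp_fac sensitivity analysis; the right
# range is climate-data dependent and must be chosen per dataset
for prcp_fac in [1.0, 1.5, 2.0, 2.5, 3.0]:
    cfg.PARAMS['prcp_fac'] = prcp_fac
    # ... re-run the mass-balance calibration and store the results
```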
Possible solution:
A file suffix that can store multiple inversions done with different SMB estimates (see Part II).

Part II:
@bearecinos: With my student I found that it is very hard to run sensitivity studies in the same working directory due to the lack of functions that accept a file suffix; this means that we have to repeat model runs and output data, duplicating what we have across multiple working directories.
mass_conservation_inversion(gdir, glen_a=None, fs=None, write=True, filesuffix='', water_level=None, t_lambda=None) accepts a filesuffix, but that argument is not passed through by inversion_tasks(gdirs, glen_a=None, fs=None, filter_inversion_output=True, add_to_log_file=True). Maybe this can be done, but it is not clear from the docs or the code.

@pat-schmitt, if you have some time: I added a few TODOs to the notebook where some more info / examples would help. Thanks!
I just came across a post on how to test Jupyter notebooks: https://www.blog.pythonlibrary.org/2018/10/16/testing-jupyter-notebooks/.
Maybe adding a simple test that just checks whether all tutorials run without an error is a good idea?
But maybe it is overkill...
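One simple shape such a smoke test could take (a sketch, not tied to this repository's layout: the `notebooks` folder name is an assumption, and it relies on `jupyter nbconvert` being installed):

```python
import subprocess
from pathlib import Path


def nbconvert_command(notebook):
    """Build the command that executes a notebook in place.

    `jupyter nbconvert --execute` exits non-zero if any cell raises,
    which is exactly the "does it run" check we want.
    """
    return ['jupyter', 'nbconvert', '--to', 'notebook',
            '--execute', '--inplace', str(notebook)]


def run_all_tutorials(folder='notebooks'):
    # Hypothetical folder layout; adjust to the repository structure
    for nb in sorted(Path(folder).glob('*.ipynb')):
        subprocess.run(nbconvert_command(nb), check=True)
```

The downside mentioned above stands: executing every tutorial takes considerable time and resources in CI.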
Hi @TimoRoth - or anyone interested in doing something different for a change 😉
I need help with the OGGM tutorials website. To get the link to work as I wanted, I had to patch sphinx-book-theme with a trivial change, but this change is unlikely to make it into the main codebase. For the context, see:
This hack is not satisfying because I want to be able to use future improvements of sphinx-book-theme. I know you are not a sphinx specialist, but neither am I -> do you think you could have a look at the extension idea I suggested in executablebooks/sphinx-book-theme#282? It's not super urgent, but somewhat important I think.
OGGM users want to learn from you!
If you have time and energy, you can document your own workflow, the one you used for Study A or Study B: any topic is welcome!
I changed some things inside the dynamic spinup function, and tasks.merge_consecutive_run_outputs
is not needed anymore, so I deleted it from the tutorial. Maybe we should still include it in some other tutorial to show that it exists.
I have no good example in mind right now (maybe merging a historical run with one GCM run?)
Add an explanation of merge_gridded_data somewhere in the tutorials; I am not sure at the moment at which place.
Adapt the distribute_flowline tutorial: rename distributed_thickness to simulated_thickness in this notebook.

I was trying to initialise OGGM with this minimal Python code:
from oggm import cfg, utils, workflow, tasks, graphics
from oggm.core import flowline
import salem
import xarray as xr
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt
cfg.initialize(logging_level='WARNING')
This command gives a lot of errors, given below. Please help me out.
2021-08-27 14:51:30: oggm.cfg: Reading default parameters from the OGGM params.cfg
configuration file.
2021-08-27 14:51:30: oggm.cfg: Multiprocessing switched OFF according to the parameter file.
2021-08-27 14:51:30: oggm.cfg: Multiprocessing: using all available processors (N=8)
HDF5ExtError Traceback (most recent call last)
/tmp/ipykernel_36346/3981835592.py in <module>
7 import matplotlib.pyplot as plt
8
----> 9 cfg.initialize(logging_level='WARNING')
~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/oggm/cfg.py in initialize(file, logging_level, params, future)
626 # Make sure we have a proper cache dir
627 from oggm.utils import download_oggm_files, get_demo_file
--> 628 download_oggm_files()
629
630 # Read in the demo glaciers
~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/oggm/utils/_downloads.py in download_oggm_files()
832 def download_oggm_files():
833 with get_lock():
--> 834 return _download_oggm_files_unlocked()
835
836
~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/oggm/utils/_downloads.py in _download_oggm_files_unlocked()
846 # download only if necessary
847 if not os.path.exists(sdir):
--> 848 ofile = file_downloader(zip_url)
849 with zipfile.ZipFile(ofile) as zf:
850 zf.extractall(odir)
~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/oggm/utils/_downloads.py in file_downloader(www_path, retry_max, cache_name, reset, auth, timeout)
621 local_path = _progress_urlretrieve(www_path, cache_name=cache_name,
622 reset=reset, auth=auth,
--> 623 timeout=timeout)
624 # if no error, exit
625 break
~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/oggm/utils/_downloads.py in _progress_urlretrieve(url, cache_name, reset, auth, timeout)
569 sys.stdout.flush()
570 res = oggm_urlretrieve(url, cache_obj_name=cache_name, reset=reset,
--> 571 reporthook=_upd, auth=auth, timeout=timeout)
572 try:
573 pbar.finish()
~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/oggm/utils/_downloads.py in oggm_urlretrieve(url, cache_obj_name, reset, reporthook, auth, timeout)
544 return cache_path
545
--> 546 return _verified_download_helper(cache_obj_name, _dlf, reset)
547
548
~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/oggm/utils/_downloads.py in _verified_download_helper(cache_obj_name, dl_func, reset)
356 if dl_verify and path and cache_obj_name not in cfg.DL_VERIFIED:
357 cache_section, cache_path = cache_obj_name.split('/', 1)
--> 358 data = get_dl_verify_data(cache_section)
359 if cache_path not in data.index:
360 logger.info('No known hash for %s' % cache_obj_name)
~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/oggm/utils/_downloads.py in get_dl_verify_data(section)
261
262 try:
--> 263 data = pd.read_hdf(verify_file_path, key=section)
264 except KeyError:
265 data = pd.DataFrame()
~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/pandas/io/pytables.py in read_hdf(path_or_buf, key, mode, errors, where, start, stop, columns, iterator, chunksize, **kwargs)
427 raise FileNotFoundError(f"File {path_or_buf} does not exist")
428
--> 429 store = HDFStore(path_or_buf, mode=mode, errors=errors, **kwargs)
430 # can't auto open/close if we are using an iterator
431 # so delegate to the iterator
~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/pandas/io/pytables.py in __init__(self, path, mode, complevel, complib, fletcher32, **kwargs)
589 self._fletcher32 = fletcher32
590 self._filters = None
--> 591 self.open(mode=mode, **kwargs)
592
593 def __fspath__(self):
~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/pandas/io/pytables.py in open(self, mode, **kwargs)
738 raise ValueError(msg)
739
--> 740 self._handle = tables.open_file(self._path, self._mode, **kwargs)
741
742 def close(self):
~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/tables/file.py in open_file(filename, mode, title, root_uep, filters, **kwargs)
313
314 # Finally, create the File instance, and return it
--> 315 return File(filename, mode, title, root_uep, filters, **kwargs)
316
317
~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/tables/file.py in __init__(self, filename, mode, title, root_uep, filters, **kwargs)
776
777 # Now, it is time to initialize the File extension
--> 778 self._g_new(filename, mode, **params)
779
780 # Check filters and set PyTables format version for new files.
tables/hdf5extension.pyx in tables.hdf5extension.File._g_new()
HDF5ExtError: HDF5 error back trace
File "H5F.c", line 509, in H5Fopen
unable to open file
File "H5Fint.c", line 1652, in H5F_open
unable to read superblock
File "H5Fsuper.c", line 632, in H5F__super_read
truncated file: eof = 1572840, sblock->base_addr = 0, stored_eof = 46191419
End of HDF5 error back trace
Unable to open/create file '/home/sourav/.oggm/downloads.sha256.hdf'
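The "truncated file" message suggests the checksum cache itself was only partially downloaded. A workaround often suggested for corrupted caches (an assumption here, not a verified fix) is to delete the file so that OGGM fetches it again on the next cfg.initialize():

```python
import os


def clear_corrupt_cache(path='~/.oggm/downloads.sha256.hdf'):
    """Remove a (possibly truncated) OGGM checksum cache file.

    Returns True if a file was deleted. OGGM should re-download the
    file the next time cfg.initialize() is called.
    """
    full = os.path.expanduser(path)
    if os.path.exists(full):
        os.remove(full)
        return True
    return False
```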
Hi @TimoRoth (@fmaussion)
I'm not sure if I'm doing something wrong here, but when I launch an OGGM notebook in Binder, after the build Binder asks for a token or password.
The shell output of the build doesn't really give you enough time to copy the token to the Binder login interface;
the shell closes so fast that it does not let you copy the token...
@dngoldberg has the same issue. Do we need to do something else?
Maybe in our .jupyter/jupyter_notebook_config.py? Or is this something that has to be set up from your end?
On the website, a CFL error occurs in the merging_glaciers notebook. But I could not reproduce the error locally, and I am not sure how to proceed.
After the substantial refactoring of #114, it would be good to go back to the following checks:
linkcheck.sh
checks that all our links are working (some don't). Both can be done relatively quickly by changing notebook execution to "off" in config.yaml
. (1) is more problematic because I don't know a way to do internal references in both standard markdown and MyST markdown (used by jupyter-book). See also: https://github.com/orgs/executablebooks/discussions/1129
Based on some questions I received today, I was thinking maybe it could be useful to have a "10 min to get to know some useful features to simulate multiple glaciers" tutorial. This tutorial could explain things like continue on error, entity tasks, multi-processing and the different compile functions.
Feel free to comment and add ideas.
I would be happy to create such a tutorial in the coming weeks.
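As a starting point, such a tutorial could revolve around a snippet like the following. This is a rough sketch from memory: the RGI IDs and prepro level are placeholders, and the exact keyword names should be checked against the current OGGM API.

```python
from oggm import cfg, utils, workflow, tasks

cfg.initialize(logging_level='WARNING')
cfg.PATHS['working_dir'] = utils.gettempdir('multi_glacier_demo')

# Two of the features the tutorial would explain:
cfg.PARAMS['use_multiprocessing'] = True   # run entity tasks in parallel
cfg.PARAMS['continue_on_error'] = True     # don't stop if one glacier fails

# Placeholder glacier IDs and prepro level
rgi_ids = ['RGI60-11.00897', 'RGI60-11.01450']
gdirs = workflow.init_glacier_directories(rgi_ids, from_prepro_level=4)

# An entity task runs once per glacier directory
workflow.execute_entity_task(tasks.run_random_climate, gdirs,
                             nyears=100, output_filesuffix='_demo')

# Compile functions gather per-glacier output into one file
ds = utils.compile_run_output(gdirs, input_filesuffix='_demo')
```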
Hi everyone,
@ehultee, @Suther11LBU and I did a bit of gathering of thoughts on the tutorials (among other things) that we have encountered. I will try my best to summarise things in different issues, this issue being the most general one (more issues will come to document a road map in the different repositories).
General comments
We feel the tutorials at the moment need to be organised so that information can be found quickly. We are sure that many of the things we need are already there, but it takes a while even for experienced users (like myself) to find stuff, so it is not straightforward for undergrads.
I propose (before suggesting new tutorials) that we organise the existing tutorials and information.
Specific issues:
The current classification (beginners + advanced & '10 mins to ...') is good, but maybe not that useful for finding what you want. We were thinking we could add more categories or subcategories (short headlines on another page of the tutorials website, an organisation chart of tutorials, an index, etc.)
Possible solution:
@ehultee suggested adding hyperlinked “tags”. My vision is that a tag would appear on every tutorial it applies to, and if you click on a tag you can see the other tutorials that have it. The tags would then be the categories listed below.
Tags example (though we can probably think of better ones):
- OGGM shop
- Glacier initial state
- Glacier evolution
- Calibration
- Education! (see last point)
- etc
We could link each model task (info section) in docs.oggm.org with a tutorial and a tag; this would enhance the docs.oggm.org sections by adding a tutorial to each part of the docs workflow.
@ehultee:
Edu material (apps, helper functions, etc) has become more abstract and slick, and core OGGM has become more technical. There is a missing middle. As a result, these resources do not work well together in the same educational setting. I taught students starting with edu notebooks, and it was a huge leap for them to understand what the real OGGM workflow was doing. I ended up doing a lot of troubleshooting for them when they worked with core OGGM.
Hi all,
I am new to OGGM. I have to start my project, so I was setting up OGGM on my mac. I was following this tutorial from OGGM website Getting Started. I wrote the following lines of code in PyCharm:
from oggm import cfg
cfg.initialize()
Executing this throws an error:
ModuleNotFoundError: No module named 'oggm'
I installed OGGM as per the instructions in the OGGM Install docs.
How do I proceed from here? In the worst case, how do I remove everything OGGM-related from my machine so that I can start fresh?
Thanks in advance
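A ModuleNotFoundError in PyCharm usually means the IDE is using a different interpreter than the conda environment where OGGM was installed. A quick diagnostic (hedged suggestion, run from inside PyCharm):

```python
import sys
import importlib.util

# Which Python is PyCharm actually running?
print(sys.executable)

# Can this interpreter see the oggm package at all?
spec = importlib.util.find_spec('oggm')
print('oggm found at:', spec.origin if spec else 'not installed here')
```

If the printed path does not point into the conda environment created during installation, selecting that environment as the project interpreter in PyCharm should fix the import, without removing anything from the machine.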
Hello everybody,
I'm fairly new to using OGGM, therefore I am currently working through the OGGM tutorials. I tried the tutorial on hydrological mass-balance output and the following error came up: ModuleNotFoundError: No module named 'oggm_edu'.
I have already updated OGGM. I've tried to install the package via conda install -c anaconda oggm_edu -y, which unfortunately did not work, and I've tried to download the dependencies again via conda install -c oggm -c conda-forge oggm-deps, but it told me that all requested packages are already installed.
I am working through the tutorial but when I get to the Glacier directories section at this point:
[5] from oggm import workflow
from oggm import DEFAULT_BASE_URL
DEFAULT_BASE_URL
I get the following error:
ImportError Traceback (most recent call last)
Cell In[5], line 2
1 from oggm import workflow
----> 2 from oggm import DEFAULT_BASE_URL
3 DEFAULT_BASE_URL
ImportError: cannot import name 'DEFAULT_BASE_URL' from 'oggm' (/usr/local/pyenv/versions/3.10.10/lib/python3.10/site-packages/oggm/__init__.py)
This makes it impossible for me to move forward. I just wondered if there is something I can do to fix the issue? All of the other steps leading up to that point seem to have run without errors.
Thanks so much,
Christina
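My hedged guess is that DEFAULT_BASE_URL was added to OGGM's top-level namespace in a newer release than the one installed here, so the tutorial and the installation are out of sync. A quick way to check the installed version without importing from oggm itself:

```python
import importlib.metadata as md


def package_version(name):
    """Return the installed version of a distribution, or None if missing."""
    try:
        return md.version(name)
    except md.PackageNotFoundError:
        return None


print(package_version('oggm'))  # None would mean oggm is not installed here
```

If the version predates the tutorial, upgrading OGGM should make the import work.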
It has something to do with the output directory, but I could not find a quick fix.
As discussed on Slack in the cluster-bremen channel, the upcoming images should contain sklearn again. This should be fixed then.
I will work on a new tutorial notebook in which I will show how to merge the output of different projection runs (different RCPs and different GCMs for a selection of glaciers) into one xarray.Dataset
. And further (maybe in a second notebook) how to create interactive plots using this new 'complete' Dataset (e.g. yearly evolution of total volume (mean and std of all GCMs), monthly hydro output for different periods, ...)
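A minimal sketch of the merging step, using toy stand-ins for the per-run datasets (in a real workflow these would come from OGGM's compile functions; the variable names and values here are illustrative assumptions):

```python
import numpy as np
import xarray as xr

years = np.arange(2020, 2101)

# Toy per-GCM projection outputs: one Dataset per run
runs = {
    'GCM-A': xr.Dataset({'volume': ('time', np.linspace(1.0, 0.6, years.size))},
                        coords={'time': years}),
    'GCM-B': xr.Dataset({'volume': ('time', np.linspace(1.0, 0.4, years.size))},
                        coords={'time': years}),
}

# Stack the runs along a new 'gcm' dimension in one 'complete' Dataset
ds = xr.concat(list(runs.values()), dim='gcm').assign_coords(gcm=list(runs))

# Multi-model mean and spread, ready for interactive plotting
mean = ds['volume'].mean('gcm')
std = ds['volume'].std('gcm')
```

The same pattern extends to an 'rcp' dimension (concat per scenario first, then per GCM), which would give the full (glacier, gcm, rcp, time) structure described above.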
Adding the hugonnet_maps is not covered in the oggm.shop tutorial.
In dem_sources.ipynb, tasks.define_glacier_region(gdir) did not work for me. It gives an HTTPDownloadError for this link: https://e4ftl01.cr.usgs.gov/MEASURES/NASADEM_HGT.001/2000.02.11/NASADEM_HGT_n46e010.zip.
I think one has to provide the Earthdata Login credentials. Should we change the source to 'SRTM' for the tutorial, as this needs no registration?
When calling workflow.merge_glacier_tasks() in merging_glaciers.ipynb, there is again the HTTPDownloadError because of NASADEM if you have not registered at Earthdata.
I did not find a way to switch the DEM here; otherwise, we could say at the beginning that the notebook can only be run after having done the registration?
@TimoRoth the builds are failing with a weird error : https://github.com/OGGM/tutorials/runs/8271511002?check_suite_focus=true
Do you think you can have a look at it soon? I'd like to have this built before the oggm workshop next week 😇
At least one place comes to mind (see the empty section there): https://github.com/OGGM/tutorials/blob/master/notebooks/where_are_the_flowlines.ipynb
Initially I was thinking about redistributing the flowlines, but the current runs are done without fl_diagnostics...
I'm currently running a test to see if it increases the data size a lot or not.
Just saw that there is a bug in the Ice thickness inversion notebook. The WORKING_DIR is not assigned to a variable.
I will have a short look at the other notebooks and create a PR afterwards.