oggm / tutorials

Notebook tutorials for the Open Global Glacier Model (OGGM)

Home Page: https://oggm.org/tutorials

License: BSD 3-Clause "New" or "Revised" License

Shell 0.03% Jupyter Notebook 99.97%
oggm tutorials numerical-methods glaciers geoscience


tutorials's People

Contributors

afisc, anoukvlug, bearecinos, fmaussion, holmgren825, keeptg, lilianschuster, pat-schmitt, timoroth


tutorials's Issues

Notebook / Pandas bug when assigning cenlon/cenlat

For info and to keep track, here is the issue report for the notebook / pandas bug: pandas-dev/pandas#46268

Maybe a more elegant solution in the notebook would be to initialise empty columns before assigning cenlon/cenlat, like template['CenLon'] = ''; then the line template.loc[i, 'CenLon'] = np.array(cenlon) could be reverted to template.loc[i, 'CenLon'] = cenlon again.
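A minimal sketch of that workaround (column names as in the notebook; the coordinate values here are made up):

```python
import numpy as np
import pandas as pd

# Hypothetical template table, as in the notebook under discussion.
template = pd.DataFrame(index=range(3))

# Initialise the columns up front (the workaround proposed above), so that
# the later element-wise .loc assignment no longer hits the pandas bug.
template['CenLon'] = ''
template['CenLat'] = ''

# Illustrative centre coordinates, not real glacier values:
cenlon, cenlat = 10.75, 46.80
i = 0
template.loc[i, 'CenLon'] = cenlon  # plain scalar, no np.array() wrapper needed
template.loc[i, 'CenLat'] = cenlat
```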

Some file path bugs after the RGI dataset update from v6 to v7

Some errors occur when I run the OGGM workflow with RGI version 7 input. For example, get_filepath("dem") fails because the dem.tif file is saved as /tmp/OGGM/OGGM-GettingStarted-10m/per_glacier/RGI2000-v7.0-G-13/RGI2000-v7.0-G-13-53/RGI2000-v7.0-G-13-53637/DEM3/dem.tif, and the path cannot be resolved correctly.
Maybe the source code is outdated.

Tutorial on computing thickness inversion from level 0, using different climate data (guidance on `prcp_fac`)

(cc @ehultee @Suther11LBU)

I know @anoukvlug is working on something similar; maybe these ideas help with that effort.

Part I:

  • How to calibrate the SMB with different climate data?

    This section of the docs (docs#climate data) needs to be linked to a tutorial that performs the same climate tasks as the default preprocessed directories in OGGM, with at least one other example from the collection of climate databases, probably showing how to run the following lines of code with different datasets.

    @bearecinos:
    The main issue for me and our student when using different climate data was the precipitation factor. The winter_prcp_fac should always be set to False if, for our own climate data, we don't know what that value is, and we should also run a sensitivity analysis for prcp_fac (which is climate-data dependent): cfg.PARAMS['use_winter_prcp_fac'] = False and cfg.PARAMS['prcp_fac'] = <sensitivity analysis>.

Possible solutions:

  • Ideally this tutorial should provide, or indicate, how users can come up with a good value; maybe a table per region with values from the literature?
  • Provide a task that calculates the precipitation factor from winter precipitation "as quick stats" for generic climate data; maybe a simple scaling relation based on the statistics of winter precipitation?
  • Sensitivity experiments: what is done here. But we cannot run a sensitivity analysis on prcp_fac for all glaciers (without observations) if the inversion task does not allow a file suffix for storing multiple inversions done with different SMB estimates (see Part II).
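The sensitivity-experiment idea above can be sketched schematically. Everything below is a placeholder: PARAMS stands in for oggm.cfg.PARAMS, and run_calibration for the actual SMB calibration and inversion steps:

```python
# Schematic sketch only: PARAMS mimics oggm.cfg.PARAMS, and run_calibration
# is a placeholder for the real calibration + inversion workflow that would
# be executed once per prcp_fac value.
PARAMS = {}

def run_calibration(prcp_fac):
    # Placeholder returning a fake scalar result (e.g. total ice volume).
    return 1.0 / prcp_fac

results = {}
for pf in (1.0, 1.5, 2.0, 2.5, 3.0):
    PARAMS['use_winter_prcp_fac'] = False  # unknown for custom climate data
    PARAMS['prcp_fac'] = pf                # climate-data dependent
    results[pf] = run_calibration(pf)
```

With file-suffix support in the inversion tasks (Part II), each entry of `results` could instead be a stored inversion output rather than a value that overwrites the previous run.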

Part II:

  • Lack of file suffix support throughout the model workflow makes it harder to do sensitivity experiments: filesuffix kwargs are absent from some workflow tasks. This is an issue in its own right (see OGGM/oggm#1677).

@bearecinos: With my student I found that it is very hard to run sensitivity studies in the same working directory due to the lack of functions that accept a file suffix; this means we have to repeat model runs and duplicate output data across multiple working directories.

  • It seems you can set a suffix for certain tasks, but that functionality is not carried through to the higher-level tasks. E.g.
    we have a file suffix here: mass_conservation_inversion(gdir, glen_a=None, fs=None, write=True, filesuffix='', water_level=None, t_lambda=None), but that argument is not forwarded by inversion_tasks(gdirs, glen_a=None, fs=None, filter_inversion_output=True, add_to_log_file=True). Maybe this is possible, but it is not clear from the docs or the code.
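A generic sketch of the kwarg forwarding that would address this. Only the signatures are taken from the issue text; the bodies are stubs, and this is not OGGM's actual implementation:

```python
# Illustrative stand-ins, NOT OGGM's real code: signatures copied from the
# issue above, bodies replaced with trivial stubs.
def mass_conservation_inversion(gdir, glen_a=None, fs=None, write=True,
                                filesuffix='', water_level=None,
                                t_lambda=None):
    # The real task writes per-glacier inversion output, tagged with filesuffix.
    return f"inversion_output{filesuffix}"

def inversion_tasks(gdirs, glen_a=None, fs=None,
                    filter_inversion_output=True, add_to_log_file=True,
                    **kwargs):
    # Forwarding **kwargs lets callers pass filesuffix down to the
    # per-glacier task -- the behaviour this issue asks for.
    return [mass_conservation_inversion(g, glen_a=glen_a, fs=fs, **kwargs)
            for g in gdirs]

results = inversion_tasks(['gdir_a', 'gdir_b'], filesuffix='_highA')
```

With this pattern, one working directory could hold several inversion outputs ('_lowA', '_highA', ...) instead of requiring one directory per experiment.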

Patching sphinx-book-theme

Hi @TimoRoth - or anyone interested in doing something different for a change 😉

I need help with the OGGM tutorials website. To get the links to work as I wanted, I had to patch sphinx-book-theme with a trivial change, but this change is unlikely to make it into the main codebase. For the context, see:

This hack is not satisfying because I want to be able to use future improvements of sphinx-book-theme. I know you are not a sphinx specialist, but neither am I. Do you think you could have a look at the extension idea I suggested in executablebooks/sphinx-book-theme#282? It's not super urgent, but somewhat important I think.

Document your workflows!

OGGM users want to learn from you!

If you have time and energy, you can document your own workflow, the one you used for Study A or Study B: any topic is welcome!

tasks.merge_consecutive_run_outputs not in the dynamic spinup tutorial anymore

I changed some things inside the dynamic spinup function, and tasks.merge_consecutive_run_outputs is no longer needed, so I deleted it from the tutorial. Maybe we should still include it in some other tutorial to show that it exists.

I have no good example in mind right now (maybe merge historical with one gcm run?)

Add merge_gridded_data to tutorials

Add an explanation of merge_gridded_data somewhere in the tutorials; I am not sure at the moment where it fits best.

Adapt distribute_flowline tutorial:

  • Add an explanation of new merging functionality for dynamic runs to this tutorial, added in PR OGGM/oggm#1674.
  • Also need to rename distributed_thickness to simulated_thickness in this notebook.

OGGM and Colab

When I tried to use the oggm package in Colab, a problem came up:
[screenshot]
and checking the environment shows:
[screenshot]
What is the reason?

Issues with the OGGM cfg.initialize command

I was trying to initialise OGGM with this minimal Python code:

from oggm import cfg, utils, workflow, tasks, graphics
from oggm.core import flowline
import salem
import xarray as xr
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt

Up to this point the code works perfectly.

cfg.initialize(logging_level='WARNING')

This command gives a lot of errors, given below. Please help me out.

2021-08-27 14:51:30: oggm.cfg: Reading default parameters from the OGGM params.cfg configuration file.
2021-08-27 14:51:30: oggm.cfg: Multiprocessing switched OFF according to the parameter file.
2021-08-27 14:51:30: oggm.cfg: Multiprocessing: using all available processors (N=8)


HDF5ExtError Traceback (most recent call last)
/tmp/ipykernel_36346/3981835592.py in <module>
7 import matplotlib.pyplot as plt
8
----> 9 cfg.initialize(logging_level='WARNING')

~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/oggm/cfg.py in initialize(file, logging_level, params, future)
626 # Make sure we have a proper cache dir
627 from oggm.utils import download_oggm_files, get_demo_file
--> 628 download_oggm_files()
629
630 # Read in the demo glaciers

~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/oggm/utils/_downloads.py in download_oggm_files()
832 def download_oggm_files():
833 with get_lock():
--> 834 return _download_oggm_files_unlocked()
835
836

~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/oggm/utils/_downloads.py in _download_oggm_files_unlocked()
846 # download only if necessary
847 if not os.path.exists(sdir):
--> 848 ofile = file_downloader(zip_url)
849 with zipfile.ZipFile(ofile) as zf:
850 zf.extractall(odir)

~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/oggm/utils/_downloads.py in file_downloader(www_path, retry_max, cache_name, reset, auth, timeout)
621 local_path = _progress_urlretrieve(www_path, cache_name=cache_name,
622 reset=reset, auth=auth,
--> 623 timeout=timeout)
624 # if no error, exit
625 break

~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/oggm/utils/_downloads.py in _progress_urlretrieve(url, cache_name, reset, auth, timeout)
569 sys.stdout.flush()
570 res = oggm_urlretrieve(url, cache_obj_name=cache_name, reset=reset,
--> 571 reporthook=_upd, auth=auth, timeout=timeout)
572 try:
573 pbar.finish()

~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/oggm/utils/_downloads.py in oggm_urlretrieve(url, cache_obj_name, reset, reporthook, auth, timeout)
544 return cache_path
545
--> 546 return _verified_download_helper(cache_obj_name, _dlf, reset)
547
548

~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/oggm/utils/_downloads.py in _verified_download_helper(cache_obj_name, dl_func, reset)
356 if dl_verify and path and cache_obj_name not in cfg.DL_VERIFIED:
357 cache_section, cache_path = cache_obj_name.split('/', 1)
--> 358 data = get_dl_verify_data(cache_section)
359 if cache_path not in data.index:
360 logger.info('No known hash for %s' % cache_obj_name)

~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/oggm/utils/_downloads.py in get_dl_verify_data(section)
261
262 try:
--> 263 data = pd.read_hdf(verify_file_path, key=section)
264 except KeyError:
265 data = pd.DataFrame()

~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/pandas/io/pytables.py in read_hdf(path_or_buf, key, mode, errors, where, start, stop, columns, iterator, chunksize, **kwargs)
427 raise FileNotFoundError(f"File {path_or_buf} does not exist")
428
--> 429 store = HDFStore(path_or_buf, mode=mode, errors=errors, **kwargs)
430 # can't auto open/close if we are using an iterator
431 # so delegate to the iterator

~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/pandas/io/pytables.py in __init__(self, path, mode, complevel, complib, fletcher32, **kwargs)
589 self._fletcher32 = fletcher32
590 self._filters = None
--> 591 self.open(mode=mode, **kwargs)
592
593 def __fspath__(self):

~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/pandas/io/pytables.py in open(self, mode, **kwargs)
738 raise ValueError(msg)
739
--> 740 self._handle = tables.open_file(self._path, self._mode, **kwargs)
741
742 def close(self):

~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/tables/file.py in open_file(filename, mode, title, root_uep, filters, **kwargs)
313
314 # Finally, create the File instance, and return it
--> 315 return File(filename, mode, title, root_uep, filters, **kwargs)
316
317

~/miniconda2/envs/oggm_env/lib/python3.7/site-packages/tables/file.py in __init__(self, filename, mode, title, root_uep, filters, **kwargs)
776
777 # Now, it is time to initialize the File extension
--> 778 self._g_new(filename, mode, **params)
779
780 # Check filters and set PyTables format version for new files.

tables/hdf5extension.pyx in tables.hdf5extension.File._g_new()

HDF5ExtError: HDF5 error back trace

File "H5F.c", line 509, in H5Fopen
unable to open file
File "H5Fint.c", line 1652, in H5F_open
unable to read superblock
File "H5Fsuper.c", line 632, in H5F__super_read
truncated file: eof = 1572840, sblock->base_addr = 0, stored_eof = 46191419

End of HDF5 error back trace

Unable to open/create file '/home/sourav/.oggm/downloads.sha256.hdf'
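The "truncated file" message suggests an interrupted download left a corrupt verification cache. A possible fix (a sketch, not an official recommendation) is to delete that cache file so OGGM re-downloads it on the next cfg.initialize():

```python
import os

def remove_corrupt_cache(path):
    """Delete a possibly truncated OGGM download-verification cache file.

    Returns True if a file was removed. OGGM re-downloads this file on the
    next cfg.initialize() call, so removing it is safe.
    """
    if os.path.exists(path):
        os.remove(path)
        return True
    return False

# Path taken from the traceback above:
cache_file = os.path.expanduser('~/.oggm/downloads.sha256.hdf')
# remove_corrupt_cache(cache_file)  # uncomment to actually delete it
```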

Binder asking for token or password when notebooks are launched

Hi @TimoRoth (@fmaussion)

I'm not sure if I'm doing something wrong here, but when I launch an OGGM notebook in Binder, it asks for a token or password after the build.
The shell info of the build doesn't really give you enough time to copy the token into the Binder login interface.

[screenshot]

The shell closes so fast that it does not let you copy the token...

@dngoldberg has the same issue. Do we need to do something else?
[screenshot]

Maybe in our .jupyter/jupyter_notebook_config.py? Or is this something that has to be set up from your end?
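One possible direction via jupyter_notebook_config.py, using the standard Jupyter Notebook server options. This is an untested sketch; whether it is appropriate for the Binder setup is exactly the question above:

```python
# Possible addition to .jupyter/jupyter_notebook_config.py (untested sketch):
# disable token/password authentication so Binder users aren't prompted.
# Only reasonable behind Binder's own access control, never on an open server.
c.NotebookApp.token = ''
c.NotebookApp.password = ''
```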

Check links and references

After the substantial refactoring of #114 it would be good to go back to the following checks:

  1. Warnings when building the book. They all come from the fact that we use "hacks" for internal references within a notebook, while jupyter-book uses MyST syntax.
  2. linkcheck.sh checks that all our links are working (some don't).

Both can be done relatively quickly by changing notebook execution to "off" in config.yaml. (1) is more problematic because I don't know a way to do internal references in both standard markdown and MyST markdown (used by jupyter-book). See also: https://github.com/orgs/executablebooks/discussions/1129
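Turning execution off for such check builds would look roughly like this in the jupyter-book configuration file (usually named _config.yml):

```yaml
# In _config.yml (jupyter-book): skip notebook execution so that the
# warning and link checks build quickly.
execute:
  execute_notebooks: "off"
```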

10 min to get to know useful features to simulate multiple glaciers

Based on some questions I received today, I was thinking it might be useful to have a "10 min to get to know some useful features to simulate multiple glaciers" tutorial. It could explain things like continue-on-error, entity tasks, multiprocessing, and the different compile functions.

Feel free to comment and add ideas.

I would be happy to create such a tutorial in the coming weeks.

Improving tutorials for v1.6 (proposed roadmap: tags)

Hi everyone,

@ehultee, @Suther11LBU and I did a bit of gathering of thoughts on the tutorials (among other things); I will try my best to summarise them in different issues. This one is the most general (more issues will follow to document a roadmap in the different repositories).

General comments

We feel the tutorials at the moment need to be organised so that information can be found quickly. We are sure that many of the things we need are already there, but it takes a while even for experienced users (like myself) to find them, so it is not straightforward for undergrads.
I propose (before suggesting new tutorials) that we first organise the existing tutorials and information.

Specific issues:

  • The current classification (beginners + advanced & "10 mins to ...") is good, but maybe not as useful for finding what you want. We were thinking we could add more categories or subcategories (short headlines on another page of the tutorials website, an organisation chart of tutorials, an index, etc.).

    Possible solution:
    @ehultee: suggested adding hyperlinked "tags". My vision is that a tag would appear on every tutorial it applies to, and clicking a tag shows the other tutorials that share it. The tags would then be the categories listed below.

    Tags example (though we can probably think of better ones):
    - OGGM shop
    - Glacier initial state
    - Glacier evolution
    - Calibration
    - Education! (see last point)
    - etc

  • We could link each model task (info section) on docs.oggm.org to a tutorial and tag; this would enhance the docs.oggm.org sections by adding a tutorial to each part of the docs workflow.

@ehultee:
Edu material (apps, helper functions, etc) has become more abstract and slick, and core OGGM has become more technical. There is a missing middle. As a result, these resources do not work well together in the same educational setting. I taught students starting with edu notebooks, and it was a huge leap for them to understand what the real OGGM workflow was doing. I ended up doing a lot of troubleshooting for them when they worked with core OGGM.

  • Maybe there needs to be an "education" tag (among the tutorial tags suggested above) to clarify what aims to help students/educators, and what is intended as "live documentation" for research users. For example, the advanced MB calibration tutorial is likely not intended for students...

Error "ModuleNotFoundError: No module named 'oggm'", PyCharm, macOS (Big sur)

Hi all,
I am new to OGGM. To start my project, I was setting up OGGM on my Mac, following the Getting Started tutorial on the OGGM website. I wrote the following lines of code in PyCharm:

from oggm import cfg
cfg.initialize()

Executing this throws an error:

ModuleNotFoundError: No module named 'oggm'

I installed OGGM as per the instructions in the OGGM website install docs.

How do I proceed from here? In the worst case, how do I remove everything OGGM-related from my machine so that I can start fresh?

Thanks in advance
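One quick diagnostic, assuming a terminal is available: check which interpreter PyCharm (or the shell) is using, and whether that interpreter can see oggm. A `ModuleNotFoundError` usually means PyCharm is configured with a different interpreter than the environment where OGGM was installed. This is a generic sketch, not OGGM-specific advice:

```shell
# Which Python is this shell (or PyCharm's configured interpreter) using?
python3 -c "import sys; print(sys.executable)"

# Can that interpreter import oggm?
python3 -c "import oggm" 2>/dev/null && echo "oggm found" || echo "oggm missing"
```

If it prints "oggm missing", point PyCharm's project interpreter at the environment where OGGM was installed instead of reinstalling everything.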

`ModuleNotFoundError: No module named 'oggm_edu'`

Hello everybody,

I'm fairly new to OGGM, so I am currently working through the OGGM tutorials. I tried the tutorial on hydrological mass-balance output and got the following error: ModuleNotFoundError: No module named 'oggm_edu'. I have already updated OGGM, tried to install the package via conda install -c anaconda oggm_edu -y (which unfortunately did not work), and tried to reinstall the dependencies via conda install -c oggm -c conda-forge oggm-deps, but it told me that all requested packages are already installed.

Thank you!
[screenshot]

Issue on page /notebooks/10minutes/preprocessed_directories.html

I am working through the tutorial, but when I get to this point in the Glacier directories section:

[5] from oggm import workflow
from oggm import DEFAULT_BASE_URL
DEFAULT_BASE_URL

I get the following error:

ImportError Traceback (most recent call last)
Cell In[5], line 2
1 from oggm import workflow
----> 2 from oggm import DEFAULT_BASE_URL
3 DEFAULT_BASE_URL

ImportError: cannot import name 'DEFAULT_BASE_URL' from 'oggm' (/usr/local/pyenv/versions/3.10.10/lib/python3.10/site-packages/oggm/init.py)

This makes it impossible for me to move forward. Is there something I can do to fix the issue? All of the other steps leading up to that point seem to have run without errors.
Thanks so much,
Christina

New Workflow Tutorial (how to merge output of different projection runs and visualize)

I will work on a new tutorial notebook showing how to merge the output of different projection runs (different RCPs and different GCMs for a selection of glaciers) into one xarray.Dataset, and further (maybe in a second notebook) how to create interactive plots from this new 'complete' Dataset (e.g. yearly evolution of total volume (mean and std over all GCMs), monthly hydro output for different periods, ...).
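A rough sketch of the merging step with made-up data; the GCM and scenario names, glacier IDs, and values are all illustrative:

```python
import numpy as np
import pandas as pd
import xarray as xr

# Hypothetical per-run output: glacier volume over time, one Dataset per
# (GCM, scenario) combination.
def fake_run_output(seed):
    rng = np.random.default_rng(seed)
    return xr.Dataset(
        {'volume': (('time', 'rgi_id'), rng.random((3, 2)))},
        coords={'time': [2020, 2050, 2100],
                'rgi_id': ['RGI60-11.00897', 'RGI60-11.00787']})

gcms = ['GCM-A', 'GCM-B']
scenarios = ['rcp26', 'rcp85']
runs = {(g, s): fake_run_output(i)
        for i, (g, s) in enumerate((g, s) for g in gcms for s in scenarios)}

# Concatenate along new 'scenario' and 'gcm' dimensions into one Dataset:
ds = xr.concat(
    [xr.concat([runs[(g, s)] for s in scenarios],
               dim=pd.Index(scenarios, name='scenario'))
     for g in gcms],
    dim=pd.Index(gcms, name='gcm'))

# Mean and spread across GCMs, ready for plotting:
vol_mean = ds.volume.mean(dim='gcm')
vol_std = ds.volume.std(dim='gcm')
```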

problem with tasks.define_glacier_region

In dem_sources.ipynb, tasks.define_glacier_region(gdir) did not work for me. It gives an HTTPDownloadError for this link: https://e4ftl01.cr.usgs.gov/MEASURES/NASADEM_HGT.001/2000.02.11/NASADEM_HGT_n46e010.zip

I think one has to provide the Earthdata login credentials. Should we change the source to 'SRTM' for the tutorial, as that needs no registration?

problem with NASADEM in merging_glaciers.ipynb

When calling workflow.merge_glacier_tasks() in merging_glaciers.ipynb, there is again the HTTPDownloadError caused by NASADEM if you have not registered at Earthdata.

I did not find a way to switch the DEM here; otherwise, we could say at the beginning that the notebook can only be run after registration.

Bug in Ice thickness inversion notebook

I just saw that there is a bug in the ice thickness inversion notebook: the WORKING_DIR is not assigned to a variable.
I will have a short look at the other notebooks and create a PR afterwards.
