A modular open source glacier model in Python¶
The model accounts for glacier geometry (including contributory branches) and includes an explicit ice dynamics module. It can simulate past and future mass-balance, volume and geometry of (almost) any glacier in the world in a fully automated and extensible workflow. We rely exclusively on publicly available data for calibration and validation.
Note
This is the software documentation: for general information about the OGGM project and related news, visit oggm.org.
Warning
This is the model documentation for users and developers as of version 1.1. For the documentation of the latest (cutting-edge) repository version, visit docs.oggm.org/en/latest.
Principles¶
Physical principles implemented in the model and their underlying assumptions, with as little code as possible. For more detailed information, we recommend to read the OGGM description paper as well.
Introduction¶
We illustrate with an example how the OGGM workflow is applied to the Tasman Glacier in New Zealand (see figure below). Here we describe shortly the purpose of each processing step, while more details are provided in the other sections:
- Preprocessing
The glacier outlines are extracted from a reference dataset (RGI) and projected onto a local gridded map of the glacier (Fig. a). Depending on the glacier location, a suitable source for the topographical data is downloaded automatically and interpolated to the local grid. The spatial resolution of the map depends on the size of the glacier.
- Flowlines
The glacier centerlines are computed using a geometrical routing algorithm (Fig. b), then filtered and slightly modified to become glacier “flowlines” with a fixed grid spacing (Fig. c).
- Catchment areas and widths
The geometrical widths along the flowlines are obtained by intersecting the normals at each grid point with the glacier outlines and the tributaries’ catchment areas. Each tributary and the main flowline has a catchment area, which is then used to correct the geometrical widths so that the flowline representation of the glacier is in close accordance with the actual altitude-area distribution of the glacier (Fig. d).
- Climate data and mass-balance
Gridded climate data (monthly temperature and precipitation) are interpolated to the glacier location and corrected for altitude at each flowline’s grid point. A carefully calibrated temperature-index model is used to compute the mass-balance for any month in the past.
- Ice thickness inversion
Using the mass-balance data computed above and relying on mass-conservation considerations, an estimate of the ice flux along each glacier cross-section can be computed. By making assumptions about the shape of the cross-section (parabolic or rectangular) and using the physics of ice flow, the model computes the thickness of the glacier along the flowlines and the total volume of the glacier (Fig. e).
- Glacier evolution
A dynamical flowline model is used to simulate the advance and retreat of the glacier under preselected climate time series. Here (Fig. f), a 120-yr-long random climate sequence leads to a glacier advance.
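These processing steps map one-to-one onto OGGM's task functions. As a minimal sketch (modelled on the examples shown later in this documentation; the paths and the input shapefile are placeholders), the corresponding call sequence could look like this:

import geopandas as gpd
import oggm
from oggm import cfg, tasks

cfg.initialize()

# Placeholder input: one glacier outline from an RGI shapefile
entity = gpd.read_file('path/to/rgi_outlines.shp').iloc[0]
gdir = oggm.GlacierDirectory(entity, base_dir='path/to/workdir')

# Preprocessing: local grid, topography, glacier mask
tasks.define_glacier_region(gdir, entity=entity)
tasks.glacier_masks(gdir)

# Flowlines (centerlines, flowlines, downstream lines)
tasks.compute_centerlines(gdir)
tasks.initialize_flowlines(gdir)
tasks.compute_downstream_line(gdir)

# Catchment areas and corrected widths
tasks.catchment_area(gdir)
tasks.catchment_width_geom(gdir)
tasks.catchment_width_correction(gdir)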
Glacier flowlines¶
Centerlines¶
Computing the centerlines is the first task to run after the definition of the local map and topography.
Our algorithm is an implementation of the procedure described by Kienholz et al., (2014). Apart from some minor changes (mostly the choice of certain parameters), we stayed close to the original algorithm.
The basic idea is to find the terminus of the glacier (its lowest point) and a series of flowline “heads” (local elevation maxima). The centerlines are then computed with a least cost routing algorithm minimizing both (i) the total elevation gain and (ii) the distance from the glacier outline:
In [1]: graphics.plot_centerlines(gdir)

The glacier has a major centerline (the longest one), and tributary branches (in this case: two).
At this stage, the centerlines are still not fully suitable for modelling. Therefore, a rather simple procedure converts them to “flowlines”, which now have a regular coordinate spacing (which they will keep for the rest of the workflow). The tails of the tributaries are cut according to a distance threshold rule:
In [2]: graphics.plot_centerlines(gdir, use_flowlines=True)

Downstream lines¶
For the glacier to be able to grow we need to determine the flowlines downstream of the current glacier geometry:
In [3]: graphics.plot_centerlines(gdir, use_flowlines=True, add_downstream=True)

The downstream lines are also computed using a routing algorithm minimizing both the distance between the glacier terminus and the border of the map and the total elevation gain, therefore following the valley floor.
Catchment areas¶
Each flowline has its own “catchment area”. These areas are computed using similar flow routing methods as the ones used for determining the flowlines. Their purpose is to attribute each glacier pixel to the right tributary in order to compute mass gain and loss for each tributary. This will also influence the later computation of the glacier widths.
In [4]: tasks.catchment_area(gdir)
In [5]: graphics.plot_catchment_areas(gdir)

Flowline widths¶
Finally, the flowline widths are computed in two steps.
First, we compute the geometrical width at each grid point. The width is drawn from the intersection of a line perpendicular to the flowline and either (i) the glacier outlines or (ii) the catchment boundaries:
In [6]: tasks.catchment_width_geom(gdir)
In [7]: graphics.plot_catchment_width(gdir)

Then, these geometrical widths are corrected so that the altitude-area distribution of the “flowline-glacier” is as close as possible to the actual distribution of the glacier using its full 2D geometry:
In [8]: tasks.catchment_width_correction(gdir)
In [9]: graphics.plot_catchment_width(gdir, corrected=True)

Note that a perfect match is not possible since the sample size is not the same between the “1.5D” and the 2D representation of the glacier.
Implementation details¶
Shared setup for these examples:
import geopandas as gpd
import oggm
from oggm import cfg, tasks
from oggm.utils import get_demo_file, gettempdir
cfg.initialize()
cfg.set_intersects_db(get_demo_file('rgi_intersect_oetztal.shp'))
cfg.PATHS['dem_file'] = get_demo_file('hef_srtm.tif')
base_dir = gettempdir('Flowlines_Docs')
entity = gpd.read_file(get_demo_file('HEF_MajDivide.shp')).iloc[0]
gdir = oggm.GlacierDirectory(entity, base_dir=base_dir)
tasks.define_glacier_region(gdir, entity=entity)
tasks.glacier_masks(gdir)
tasks.compute_centerlines(gdir)
tasks.initialize_flowlines(gdir)
tasks.compute_downstream_line(gdir)
Mass-balance¶
The mass-balance (MB) model implemented in OGGM is an extended version of the temperature index model presented by Marzeion et al., (2012). While the equation governing the mass-balance is that of a traditional temperature index model, our special approach to calibration requires that we spend some time describing it.
Climate data¶
The MB model implemented in OGGM needs monthly time series of temperature and precipitation. The current default is to download and use the CRU TS data provided by the Climatic Research Unit of the University of East Anglia.
CRU (default)¶
If not specified otherwise, OGGM will automatically download and unpack the latest dataset from the CRU servers.
Warning
While the downloaded zip files are ~370 MB in size, they are ~5.6 GB large after decompression!
The raw, coarse (0.5°) dataset is then downscaled to a higher resolution grid (CRU CL v2.0 at 10’ resolution) following the anomaly mapping approach described by Tim Mitchell in his CRU faq (Q25). Note that we don’t expect this downscaling to add any information beyond what is already available at the original resolution, but it allows us to have an elevation-dependent dataset based on a presumably better climatology. The monthly anomalies are computed following [Harris_etal_2010]: we use standard anomalies for temperature and scaled (fractional) anomalies for precipitation.
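A minimal sketch of the two anomaly types, for a single fine-grid point and one calendar month (the variable names and the choice of reference period are illustrative, not OGGM's actual implementation):

import numpy as np

def downscaled_series(ts_coarse, clim_fine, kind='temp'):
    """Anomaly mapping of a coarse monthly series onto a fine climatology.

    ts_coarse : 1D array, CRU TS values of one calendar month at the
                nearest coarse grid point
    clim_fine : scalar, CRU CL v2.0 climatology at the fine grid point
    kind      : 'temp' -> standard (additive) anomalies,
                anything else -> scaled (fractional) anomalies
    """
    ref = ts_coarse.mean()  # reference climatology (assumed period)
    if kind == 'temp':
        # standard anomalies: deviations are added to the climatology
        return clim_fine + (ts_coarse - ref)
    # scaled anomalies: the ratio multiplies the climatology
    return clim_fine * ts_coarse / ref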
HISTALP¶
If required by the user, OGGM can also automatically download and use the data from the HISTALP dataset.
The data is available at 5’ resolution (about 0.0833°) from 1801 to 2014. However, the data is considered spurious before 1850; we therefore recommend to use data from 1850 onwards. This can be done by setting cfg.PARAMS['baseline_y0'] = 1850.
In [1]: example_plot_temp_ts() # the code for these examples is posted below

User-provided dataset¶
You can provide any other dataset to OGGM. See the HISTALP_oetztal.nc data file in the OGGM sample-data folder for an example format.
GCM data¶
OGGM can also use climate model output to drive the mass-balance model. In this case we still rely on gridded observations (CRU) for the baseline climatology and apply the GCM anomalies computed from a preselected reference period (currently: 1961-1990). This method is often called the delta method.
Currently we can process data from the CESM Last Millennium Ensemble project only (see tasks.process_cesm_data()), but support for other models will be added soon.
Elevation dependency¶
OGGM needs to compute the temperature and precipitation at the altitude of the glacier grid points. The default is to use a fixed gradient of \(-6.5\) K km\(^{-1}\) for temperature and no gradient for precipitation. However, OGGM also implements an optional algorithm which computes the local gradient by linear regression of the 9 surrounding grid points. This method requires that the near-surface temperature lapse rates provided by the climate dataset are reliable (in most cases, you should probably use the simple fixed gradient instead).
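As a sketch, applying the default fixed gradients to shift the reference climate to a flowline grid point could look like this (the function and names are illustrative, not the OGGM API):

DEFAULT_TEMP_GRAD = -6.5e-3  # K m-1, i.e. -6.5 K km-1 (default in the text)

def climate_at_height(temp_ref, prcp_ref, z_ref, z,
                      temp_grad=DEFAULT_TEMP_GRAD, prcp_grad=0.0):
    """Shift temperature and precipitation from elevation z_ref to z."""
    dz = z - z_ref
    temp = temp_ref + temp_grad * dz          # fixed lapse rate
    prcp = prcp_ref * (1.0 + prcp_grad * dz)  # no gradient per default
    return temp, prcp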
Temperature index model¶
The monthly mass-balance \(B_i\) at elevation \(z\) is computed as:

\[B_i(z) = P_i^{Solid}(z) - \mu^{*} \, \max \left( T_i(z) - T_{Melt}, 0 \right)\]

where \(P_i^{Solid}\) is the monthly solid precipitation, \(T_i\) the monthly temperature and \(T_{Melt}\) is the monthly mean air temperature above which ice melt is assumed to occur (0°C per default). Solid precipitation is computed as a fraction of the total precipitation, based on the monthly mean temperature: all solid below temp_all_solid (default: 0°C), all liquid above temp_all_liq (default: 2°C), and linear in between.
The parameter \(\mu ^{*}\) indicates the temperature sensitivity of the glacier, and it needs to be calibrated.
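As an illustration, here is a self-contained sketch of this model. The defaults follow the values quoted above; the function itself is illustrative rather than the actual OGGM implementation:

import numpy as np

def monthly_mb(prcp, temp, mu_star,
               t_melt=0.0, t_all_solid=0.0, t_all_liq=2.0):
    """Monthly mass-balance B_i at one elevation.

    prcp, temp : monthly total precipitation and mean temperature
    mu_star    : calibrated temperature sensitivity of the glacier
    """
    # solid fraction: 1 below t_all_solid, 0 above t_all_liq, linear between
    frac_solid = np.clip(1 - (temp - t_all_solid) / (t_all_liq - t_all_solid),
                         0, 1)
    prcp_solid = prcp * frac_solid
    # B_i = P_solid - mu* max(T - T_melt, 0)
    return prcp_solid - mu_star * np.maximum(temp - t_melt, 0)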
Calibration¶
We will start by making two observations:
the sensitivity parameter \(\mu ^{*}\) depends on many parameters, most of them being glacier-specific (e.g. avalanches, topographical shading, cloudiness…).
the sensitivity parameter \(\mu ^{*}\) will be affected by uncertainties and systematic biases in the input climate data.
As a result, \(\mu ^{*}\) can vary greatly between neighboring glaciers. The calibration procedure introduced by Marzeion et al., (2012) and implemented in OGGM makes full use of these apparent handicaps by turning them into assets.
The calibration procedure starts with glaciers for which we have direct observations of the annual specific mass-balance SMB. We use the WGMS FoG (shipped with OGGM) for this purpose.
For each of these glaciers, time-dependent “candidate” temperature sensitivities \(\mu (t)\) are estimated by requiring that the average specific mass-balance \(B_{31}\) is equal to zero. \(B_{31}\) is computed for a 31-yr period centered around the year \(t\) and for a constant glacier geometry fixed at the RGI date (e.g. 2003 for most glaciers in the European Alps).
In [2]: example_plot_mu_ts() # the code for these examples is posted below

Around 1900, the climate was cold and wet. As a consequence, the temperature sensitivity required to maintain the 2003 glacier geometry is high. Inversely, the recent climate is warm and the glacier must have a small temperature sensitivity in order to preserve its geometry.
Note that these \(\mu (t)\) are just hypothetical sensitivities necessary to maintain the glacier in equilibrium in an average climate at the year \(t\). We call them “candidates”, since one (or more) of them is likely to be close to the “real” sensitivity of the glacier.
This is when the mass-balance observations come into play: each of these candidates can be used to compute the mass-balance during the period for which we have observations. We then compare the model output with the expected mass-balance and compute the model bias:
In [3]: example_plot_bias_ts() # the code for these examples is posted below

The bias is positive when \(\mu\) is too low, and negative when \(\mu\) is too high. Here, the bias crosses the zero line twice. All dates correspond to approximately the same \(\mu\) (but not exactly, as precipitation and temperature both have an influence on it). These dates at which the \(\mu\) candidates are close to the real \(\mu\) are called \(t^*\) (the associated sensitivities \(\mu (t^*)\) are called \(\mu^*\)). For the next step, one \(t^*\) is sufficient: we pick the one which corresponds to the smallest absolute bias.
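Conceptually, the candidate search and the selection of \(t^*\) can be sketched as follows (a strong simplification of the actual routines in oggm.core.climate; all names are illustrative):

import numpy as np
import pandas as pd

def pick_t_star(prcp_yr, temp_yr, mb_obs):
    """Select t* from annual glacier-wide climate and observed SMB.

    prcp_yr, temp_yr : pandas Series (indexed by year) of annual solid
                       precipitation and melt-relevant temperature sums
                       over the fixed glacier geometry
    mb_obs           : pandas Series of observed annual SMB
    """
    # mu(t) candidates: require the 31-yr average SMB around t to be
    # zero, i.e. <prcp> - mu * <temp> = 0
    prcp_31 = prcp_yr.rolling(31, center=True, min_periods=31).mean()
    temp_31 = temp_yr.rolling(31, center=True, min_periods=31).mean()
    mu_t = (prcp_31 / temp_31).dropna()

    # model bias of each candidate over the observation period
    p, t = prcp_yr.loc[mb_obs.index], temp_yr.loc[mb_obs.index]
    bias = pd.Series({yr: (p - mu * t).mean() - mb_obs.mean()
                      for yr, mu in mu_t.items()})

    t_star = bias.abs().idxmin()  # smallest absolute bias
    return t_star, mu_t[t_star]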
For the glaciers where observations are available, this detour via the \(\mu\) candidates is not necessary to find the correct \(\mu^*\). Indeed, the goal of these computations is in fact to find \(t^*\), which is the actual value to be interpolated to glaciers where no observations are available.
The benefit of this approach is best shown with the results of a cross-validation study realized by Marzeion et al., (2012) (and confirmed by OGGM):

Benefit of spatially interpolating \(t^{*}\) instead of \(\mu ^{*}\) as shown by leave-one-glacier-out cross-validation (N = 255). Left: error distribution of the computed mass-balance if determined by the interpolated \(t^{*}\). Right: error distribution of the mass-balance if determined by interpolation of \(\mu ^{*}\).¶
This substantial improvement in model performance is due to several factors:
the equilibrium constraint applied on \(\mu\) implies that the sensitivity cannot vary much during the last century. In fact, \(\mu\) at one glacier varies far less in one century than between neighboring glaciers, because of all the factors mentioned above. In particular, it will vary comparatively little around a given year \(t\): errors in \(t^*\) (even large ones) will result in small errors in \(\mu^*\).
the equilibrium constraint also implies that systematic biases in temperature and precipitation (no matter how large) will automatically be compensated by all \(\mu (t)\), and therefore also by \(\mu^*\). In that sense, the calibration procedure can be seen as an empirically driven downscaling strategy: if a glacier is present, then the local climate (or the glacier temperature sensitivity) must allow a glacier to be there. For example, the effect of avalanches or a negative bias in precipitation input will have the same impact on calibration: \(\mu^*\) should be reduced to take these effects into account, even though they are not resolved by the mass-balance model.
The most important drawback of this calibration method is that it assumes that two neighboring glaciers should have a similar \(t^*\). This is not necessarily the case, as factors other than climate (such as the glacier size) will influence \(t^*\) too. Our results (and the arguments listed above) show however that this is an approximation we can cope with.
On a final note, it is important to mention that \(\mu^*\) and \(t^*\) should not be over-interpreted in terms of “real” temperature sensitivities or “real” response times of the glacier. This procedure is primarily a calibration method, and as such it can be statistically scrutinized (for example with cross-validation). It can also be noted that the MB observations play a relatively minor role in the calibration: they could be entirely avoided by fixing a \(t^*\) for all glaciers in a region (or even worldwide). The resulting changes in calibrated \(\mu^*\) would be comparatively small (again, because of the local constraints on \(\mu\)). The MB observations, however, play a major role in the assessment of model uncertainty.
References¶
- Harris_etal_2010
Harris, I., Jones, P. D., Osborn, T. J., & Lister, D. H. (2014). Updated high-resolution grids of monthly climatic observations - the CRU TS3.10 Dataset. International Journal of Climatology, 34(3), 623–642. https://doi.org/10.1002/joc.3711
Implementation details¶
If you had the courage to read until here, it means that you have concrete questions about the implementation of the mass-balance model in OGGM. Here are some more details:

- the mass-balance in OGGM is computed from the altitudes and widths of the flowline grid points (see Glacier flowlines). The easiest way to let OGGM compute the mass-balance for you is to use core.massbalance.PastMassBalance.
- the interpolation of \(t^*\) is done with an inverse distance weighting algorithm (see tasks.local_t_star()).
- if more than one \(t^*\) is found for some reference glaciers, then the glaciers with only one \(t^*\) will determine the most likely \(t^*\) for the other glaciers (see tasks.compute_ref_t_stars()).
- yes, the temperature gradients and the precipitation scaling factor will have an influence on the results, but it is small since any change will automatically be compensated by \(\mu^*\). We are currently quantifying these effects more precisely.
Code used to generate these examples:
import os
import geopandas as gpd
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import xarray as xr
import oggm
from oggm import cfg, tasks
from oggm.core.climate import (mb_yearly_climate_on_glacier,
t_star_from_refmb,
local_t_star, mu_star_calibration)
from oggm.core.massbalance import (ConstantMassBalance)
from oggm.utils import get_demo_file, gettempdir
cfg.initialize()
cfg.set_intersects_db(get_demo_file('rgi_intersect_oetztal.shp'))
cfg.PATHS['dem_file'] = get_demo_file('hef_srtm.tif')
base_dir = gettempdir('Climate_docs')
entity = gpd.read_file(get_demo_file('HEF_MajDivide.shp')).iloc[0]
gdir = oggm.GlacierDirectory(entity, base_dir=base_dir)
tasks.define_glacier_region(gdir, entity=entity)
tasks.glacier_masks(gdir)
tasks.compute_centerlines(gdir)
tasks.initialize_flowlines(gdir)
tasks.compute_downstream_line(gdir)
tasks.catchment_area(gdir)
tasks.catchment_width_geom(gdir)
tasks.catchment_width_correction(gdir)
data_dir = get_demo_file('HISTALP_precipitation_all_abs_1801-2014.nc')
cfg.PATHS['cru_dir'] = os.path.dirname(data_dir)
cfg.PARAMS['baseline_climate'] = 'HISTALP'
cfg.PARAMS['baseline_y0'] = 1850
tasks.process_histalp_data(gdir)
tasks.glacier_mu_candidates(gdir)
mbdf = gdir.get_ref_mb_data()
res = t_star_from_refmb(gdir, mbdf=mbdf.ANNUAL_BALANCE)
local_t_star(gdir, tstar=res['t_star'], bias=res['bias'], reset=True)
mu_star_calibration(gdir, reset=True)
# For flux plot
tasks.prepare_for_inversion(gdir, add_debug_var=True)
# For plots
mu_yr_clim = gdir.read_pickle('climate_info')['mu_candidates_glacierwide']
years, temp_yr, prcp_yr = mb_yearly_climate_on_glacier(gdir)
# which years to look at
selind = np.searchsorted(years, mbdf.index)
temp_yr = np.mean(temp_yr[selind])
prcp_yr = np.mean(prcp_yr[selind])
# Average oberved mass-balance
ref_mb = mbdf.ANNUAL_BALANCE.mean()
mb_per_mu = prcp_yr - mu_yr_clim * temp_yr
# Diff to reference
diff = mb_per_mu - ref_mb
pdf = pd.DataFrame()
pdf[r'$\mu (t)$'] = mu_yr_clim
pdf['bias'] = diff
res = t_star_from_refmb(gdir, mbdf=mbdf.ANNUAL_BALANCE)
# For the mass flux
cl = gdir.read_pickle('inversion_input')[-1]
mbmod = ConstantMassBalance(gdir)
mbx = (mbmod.get_annual_mb(cl['hgt']) * cfg.SEC_IN_YEAR *
cfg.PARAMS['ice_density'])
fdf = pd.DataFrame(index=np.arange(len(mbx))*cl['dx'])
fdf['Flux'] = cl['flux']
fdf['Mass balance'] = mbx
# For the distributed thickness
tasks.mass_conservation_inversion(gdir, glen_a=2.4e-24 * 3, fs=0)
tasks.distribute_thickness_per_altitude(gdir)
# plot functions
def example_plot_temp_ts():
    d = xr.open_dataset(gdir.get_filepath('climate_monthly'))
    temp = d.temp.resample(time='12MS').mean('time').to_series()
    temp.index.name = None  # 'del temp.index.name' fails on recent pandas
    temp.plot(figsize=(8, 4), label='Annual temp')
    tsm = temp.rolling(31, center=True, min_periods=15).mean()
    tsm.plot(label='31-yr avg')
    plt.legend(loc='best')
    plt.title('HISTALP annual temperature, Hintereisferner')
    plt.ylabel(r'degC')
    plt.tight_layout()
    plt.show()
def example_plot_mu_ts():
    mu_yr_clim.plot(figsize=(8, 4), label=r'$\mu (t)$')
    plt.legend(loc='best')
    plt.title(r'$\mu$ candidates Hintereisferner')
    plt.ylabel(r'$\mu$ (mm yr$^{-1}$ K$^{-1}$)')
    plt.tight_layout()
    plt.show()
def example_plot_bias_ts():
    ax = pdf.plot(figsize=(8, 4), secondary_y='bias')
    plt.hlines(0, 1800, 2015, linestyles='-')
    ax.set_ylabel(r'$\mu$ (mm yr$^{-1}$ K$^{-1}$)')
    ax.set_title(r'$\mu$ candidates HEF')
    plt.ylabel(r'bias (mm yr$^{-1}$)')
    yl = plt.gca().get_ylim()
    plt.plot((res['t_star'], res['t_star']), (yl[0], 0),
             linestyle=':', color='grey')
    plt.ylim(yl)
    plt.tight_layout()
    plt.show()
def example_plot_massflux():
    fig, ax = plt.subplots(figsize=(8, 4))
    fdf.plot(ax=ax, secondary_y='Mass balance', style=['C1-', 'C0-'])
    plt.axhline(0., color='grey', linestyle=':')
    ax.set_ylabel('Flux [m$^3$ s$^{-1}$]')
    ax.right_ax.set_ylabel('MB [kg m$^{-2}$ yr$^{-1}$]')
    ax.set_xlabel('Distance along flowline (m)')
    plt.title('Mass flux and mass balance along flowline')
    plt.tight_layout()
    plt.show()
Ice dynamics¶
The glaciers in OGGM are represented by a depth-integrated flowline model. The equations for the isothermal shallow ice are solved along the glacier centerline, which is computed so as to best represent the flow of ice along the glacier (see for example antarcticglaciers.org for a general introduction about the various types of glacier models).
Ice flow¶
Let \(S\) be the area of a cross-section perpendicular to the flowline. It has a width \(w\) and a thickness \(h\) and, in this example, a parabolic bed shape.
Example of a cross-section along the glacier flowline. Background image from http://www.swisseduc.ch/glaciers/alps/hintereisferner/index-de.html¶
Volume conservation for this discrete element implies:

\[\frac{\partial S}{\partial t} = w \, \dot{m} - \nabla \cdot q\]

where \(\dot{m}\) is the mass-balance, \(q = u S\) the flux of ice, and \(u\) the depth-integrated ice velocity ([Cuffey_Paterson_2010], p 310). This velocity can be computed from Glen’s flow law as a function of the basal shear stress \(\tau\):

\[u = f_d \, h \, \tau^n + f_s \, \frac{\tau^n}{h}\]

The second term accounts for basal sliding; see e.g. [Oerlemans_1997] or [Golledge_Levy_2011]. It introduces an additional free parameter \(f_s\) and is therefore ignored in a first approach. The deformation parameter \(f_d\) is better constrained and relates to Glen’s temperature-dependent creep parameter \(A\):

\[f_d = \frac{2 A}{n + 2}\]

The basal shear stress \(\tau\) depends e.g. on the geometry of the bed [Cuffey_Paterson_2010]. Currently, it is assumed to be equal to the driving stress \(\tau_d\):

\[\tau_d = \alpha \, \rho \, g \, h\]

where \(\alpha\) is the slope of the flowline, \(\rho\) the density of ice and \(g\) the gravitational acceleration. Both the FluxBasedModel and the MUSCLSuperBeeModel solve these equations, but with different numerical schemes.
Bed shapes¶
OGGM implements a number of possible bed-shapes. Currently the shape has no direct influence on the shear stress (i.e. Cuffey and Paterson’s “shape factor” is not considered), but the shape will still have a considerable influence on glacier dynamics:
the width change as a result of mass transport will be different for each shape, thus influencing the mass-balance term \(w \, \dot{m}\)
with all other things held constant, a change in section area \(\partial S / \partial t\) due to mass convergence/divergence will result in a different \(\partial h / \partial t\), and thus in a different shear stress at the next time step.
Trapezoidal¶
Trapezoidal shape with two degrees of freedom:

\[S = \frac{w + w_0}{2} \, h\]

The width change with thickness depends on \(\lambda\):

\[w = w_0 + \lambda \, h\]

[Golledge_Levy_2011] uses \(\lambda = 1\) (a 45° wall angle).
Parabolic¶
Parabolic shape with one degree of freedom, which makes it particularly useful for the bed inversion: if \(S\) and \(w\) are known, the thickness follows directly from

\[h = \frac{3}{2} \frac{S}{w}\]

The parabola is defined by the bed-shape parameter \(P_s = 4 h / w^2\) [1]. Very small values of this parameter imply very flat shapes, unrealistically sensitive to changes in \(h\). For this reason, the default in OGGM is to use the mixed flowline model.
- [1] the local thickness \(y\) of the parabolic bed can be described by \(y = h - P_s x^2\). At \(x = w / 2\), \(y = 0\) and therefore \(P_s = 4 h / w^2\).
Mixed¶
A combination of rectangular, trapezoidal and parabolic shapes. The default is parabolic, but can be adapted in two cases:
if the glacier section touches an ice-divide or a neighboring tributary catchment outline, the bed is considered to be rectangular;
if the parabolic shape parameter \(P_s\) is below a certain threshold, a trapezoidal shape is used. Indeed, flat parabolas tend to be very sensitive to small changes in \(h\), which is undesired.
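For reference, here is a sketch of the section area \(S\) for each supported shape as a function of surface width \(w\) and thickness \(h\) (the trapezoidal form assumes the \(w = w_0 + \lambda h\) parameterization given above; the function is illustrative, not the OGGM internals):

def section_area(h, w, shape='parabolic', w0=None):
    """Cross-section area S for the bed shapes discussed above (sketch)."""
    if shape == 'rectangular':
        return h * w
    if shape == 'parabolic':
        return 2. / 3. * h * w
    if shape == 'trapezoidal':
        # w0 is the bed width; the surface width is w = w0 + lambda * h
        return 0.5 * (w + w0) * h
    raise ValueError('unknown bed shape: %s' % shape)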
Numerics¶
“Flux based” model¶
Most flowline models treat the volume conservation equation as a diffusion problem, taking advantage of the robust numerical solutions available for this type of equation. The problem with this approach is that it implies developing the \(\partial S / \partial t\) term to solve for the ice thickness \(h\) directly, thus implying different diffusion equations for different bed geometries (e.g. [Oerlemans_1997] with a trapezoidal bed).
The OGGM “flux based” model solves for the \(\nabla \cdot q\) term (hence the name). The strong advantage of this method is that the numerical equations are the same for any bed shape, considerably simplifying the implementation. Similar to the “diffusion approach”, the model loses mass conservation on very steep slopes ([Jarosch_etal_2013]).
The numerical scheme implemented in OGGM is tested against A. Jarosch’s MUSCLSuperBee model (see below) and Hans Oerlemans’ diffusion model for various idealized cases. For all cases but the steep-slope one, the model performs very well.
In order to increase the stability and speed of the computations, we solve the numerical equations on a forward staggered grid and we use an adaptive time stepping scheme.
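For illustration, an adaptive step under a CFL-type stability criterion might look like the sketch below; the constants are illustrative and this is not the OGGM scheme verbatim:

MAX_DT = 31 * 24 * 3600.  # upper bound of about a month (illustrative)
CFL = 0.1                 # illustrative safety factor

def adaptive_dt(u_max, dx):
    """Time step limited by the fastest velocity on the staggered grid."""
    if u_max <= 0:
        return MAX_DT
    return min(CFL * dx / u_max, MAX_DT)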
MUSCLSuperBeeModel¶
A shallow ice model with improved numerics ensuring mass conservation even on very steep slopes [Jarosch_etal_2013]. The model is currently being extended to account for various bed shapes and tributaries and will likely become the default in OGGM.
Glacier tributaries¶
Glaciers in OGGM have a main centerline and, sometimes, one or more tributaries (which can themselves also have tributaries, see Glacier flowlines). The number of these tributaries depends on many factors, but most of the time the algorithm works well.
The main flowline and its tributaries are all modelled individually. At the end of a time step, the tributaries transport mass to the branch they are flowing into. Numerically, this mass transport is handled by adding an element at the end of the flowline with the same properties (width, thickness…) as the last grid point, with the difference that the slope \(\alpha\) is computed with respect to the altitude of the point the tributary is flowing to. The ice flux is then computed as usual and transferred to the branch below.
The computation of the ice flux is always done first for the lowest-order branches (those without tributaries) and then for the higher-order ones, ensuring a correct mass redistribution. The use of the slope between the tributary and the main branch ensures that the former is not dynamically coupled to the latter. If the angle is positive or if no ice is present at the end of the tributary, no mass exchange occurs.
References¶
- Cuffey_Paterson_2010
Cuffey, K., and W. S. B. Paterson (2010). The Physics of Glaciers, Butterworth‐Heinemann, Oxford, U.K.
- Farinotti_etal_2009
Farinotti, D., Huss, M., Bauder, A., Funk, M., & Truffer, M. (2009). A method to estimate the ice volume and ice-thickness distribution of alpine glaciers. Journal of Glaciology, 55 (191), 422–430.
- Golledge_Levy_2011
Golledge, N. R., and Levy, R. H. (2011). Geometry and dynamics of an East Antarctic Ice Sheet outlet glacier, under past and present climates. Journal of Geophysical Research: Earth Surface, 116(3), 1–13.
- Jarosch_etal_2013
Jarosch, A. H., Schoof, C. G., & Anslow, F. S. (2013). Restoring mass conservation to shallow ice flow models over complex terrain. The Cryosphere, 7(1), 229–240. https://doi.org/10.5194/tc-7-229-2013
- Oerlemans_1997
Oerlemans, J. (1997). A flowline model for Nigardsbreen, Norway: projection of future glacier length based on dynamic calibration with the historic record. Annals of Glaciology, 24, 382–389.
Bed inversion¶
To compute the initial ice thickness \(h_0\), OGGM follows a methodology largely inspired by [Farinotti_etal_2009], but fully automatised and relying on different methods for the mass-balance and the calibration.
Basics¶
The principle is simple. Let’s assume for now that we know the flux of ice \(q\) flowing through a section of our glacier. The flowline physics and geometrical assumptions can be used to solve for the ice thickness \(h\):

\[q = u \, S = \left( f_d \, h \, \tau^n + f_s \, \frac{\tau^n}{h} \right) S\]

With \(n=3\) and \(S = h w\) (in the case of a rectangular section) or \(S = 2 / 3 \, h w\) (parabolic section), the equation reduces to solving a polynomial of degree 5 with one unique solution in \(\mathbb{R}_+\). If we neglect sliding (the default in OGGM and in [Farinotti_etal_2009]), the solution is even simpler.
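For illustration, the rectangular no-sliding case can be inverted in closed form. This is a sketch only (the actual OGGM inversion handles all bed shapes numerically); the parameter values are the ones quoted in this documentation:

RHO, G, N = 900., 9.81, 3   # illustrative constants
A = 2.4e-24                 # creep parameter (s-1 Pa-3)
FD = 2. * A / (N + 2)       # deformation parameter f_d

def thickness_rectangular(q, w, alpha):
    """Ice thickness h from flux q (m3 s-1), width w (m) and slope alpha.

    Rectangular section (S = h w), no sliding:
    q = u S = f_d (rho g alpha)^n w h^(n+2)  ->  solve for h.
    """
    c = FD * (RHO * G * alpha) ** N * w
    return (q / c) ** (1. / (N + 2))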
Ice flux¶
If we consider a point on the flowline and the catchment area \(\Omega\) upstream of this point, we have:

\[q = \int_{\Omega} \left( \dot{m} - \rho \frac{\partial h}{\partial t} \right) dA = \int_{\Omega} \widetilde{m} \; dA\]

with \(\dot{m}\) the mass balance, and \(\widetilde{m} = \dot{m} - \rho \, \partial h / \partial t\) the “apparent mass-balance” after [Farinotti_etal_2009]. If the glacier is in steady state, the apparent mass-balance is equivalent to the actual (and observable) mass-balance. Unfortunately, \(\partial h / \partial t\) is not known and there is no easy way to compute it. In order to continue, we have to make the assumption that our geometry is in equilibrium.
This however has a very useful consequence: indeed, for the calibration of our Mass-balance model it is required to find a date \(t^*\) at which the glacier would be in equilibrium with its average climate while conserving its modern geometry. Thus, we have \(\widetilde{m} = \dot{m}_{t^*}\), where \(\dot{m}_{t^*}\) is the 31-yr average mass-balance centered at \(t^*\) (which is known since the mass-balance model calibration).
The plot below shows the mass flux along the major flowline of Hintereisferner glacier. By construction, the flux is maximal at the equilibrium line and zero at the glacier tongue.
In [1]: example_plot_massflux()

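In discrete form, this integral is a cumulative sum along the flowline. A minimal sketch (the variable names are illustrative, not the OGGM internals):

import numpy as np

def flux_along_flowline(widths, mb_apparent, dx, rho=900.):
    """Ice volume flux q (m3 s-1) at each flowline grid point.

    widths      : flowline widths (m)
    mb_apparent : apparent mass-balance at each grid point (kg m-2 s-1)
    dx          : grid spacing along the flowline (m)
    """
    # mass gained per grid cell, converted to an ice volume flux
    gain = widths * dx * mb_apparent / rho
    # mass conservation: the flux is the cumulative upstream sum
    return np.cumsum(gain)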
Calibration¶
A number of climate and glacier-related parameters are fixed prior to the inversion, leaving only one free parameter for the calibration of the bed inversion procedure: the inversion factor \(f_{inv}\). It is defined such that:

\[A = f_{inv} \, A_0\]

with \(A_0\) the standard creep parameter (\(2.4 \times 10^{-24} \, \mathrm{s^{-1}\,Pa^{-3}}\)). Currently, there is no “optimum” \(f_{inv}\) parameter in the model. There is a high uncertainty in the “true” \(A\) parameter as well as in all other processes affecting the ice thickness. Therefore, we cannot make any recommendation for the “best” parameter. Global sensitivity analyses show that the default value is a good compromise (Maussion et al., 2018).

Note: for ITMIX, \(f_{inv}\) was set to a value of approximately 3 (which was too high and underestimated the ice thickness in most cases, with the exception of the European Alps).
Distributed ice thickness¶
To obtain a 2D map of the glacier bed, the flowline thicknesses need to be interpolated to the glacier mask. The current implementation of this step in OGGM is very simple, but provides nice looking maps:
In [2]: tasks.distribute_thickness_per_altitude(gdir)
In [3]: graphics.plot_distributed_thickness(gdir)

Using OGGM¶
How to use the model, with concrete python code examples.
Installing OGGM¶
OGGM itself is a pure Python package, but it has several dependencies which are not trivial to install. The instructions below provide all the required detail and should work on any platform.
OGGM is fully tested with Python version 3.6 on Linux, and all versions above 3.4 should work as well. OGGM doesn’t work with Python 2.7.
OGGM itself should also work on Mac OS and Windows platforms, but we make no guarantee that our dependencies do.
Note
Complete beginners should get familiar with Python and its packaging ecosystem before trying to install and run OGGM.
For most users we recommend following the steps here to install Python and the package dependencies with the conda package manager. Linux or Debian users and people with experience with pip can follow the specific instructions here to install with virtualenv.
Dependencies¶
- Standard SciPy stack:
numpy
scipy
scikit-image
pillow
matplotlib
pandas
xarray
joblib
- Configuration file parsing tool:
configobj
- I/O:
netcdf4
- GIS tools:
gdal
shapely
pyproj
rasterio
geopandas
- Testing:
pytest
- Other libraries:
boto3
- Optional:
progressbar2 (displays the download progress)
bottleneck (speeds up xarray operations)
dask (works nicely with xarray)
python-colorspace (applies HCL-based color palettes to some graphics)
Install with conda (all platforms)¶
This is the recommended way to install OGGM.
Prerequisites¶
You should have a recent version of the conda package manager. You can get conda by installing miniconda (the package manager alone - recommended) or anaconda (the full suite - with many packages you won’t need).
Conda environment¶
We recommend to create a specific environment for OGGM. In a terminal window, type:
conda create --name oggm_env python=3.X

where 3.X is the Python version shipped with conda (currently 3.6). You can of course use any other name for your environment.
Don’t forget to activate it before going on:
source activate oggm_env
(on Windows: activate oggm_env
)
Dependencies¶
Install all OGGM dependencies from the conda-forge and oggm conda channels:

conda install -c oggm -c conda-forge oggm-deps

The oggm-deps package is a “meta package”: it does not contain any code but will install all the packages OGGM needs automatically.
Warning
The conda-forge channel ensures that the complex package dependencies are handled correctly. Subsequent installations or upgrades from the default conda channel might break the chain. We strongly recommend to always use the conda-forge channel for your installation.
You might consider setting conda-forge (and oggm) as your default channels:

conda config --add channels conda-forge
conda config --add channels oggm
No scientific Python installation is complete without installing IPython and Jupyter:
conda install -c conda-forge ipython jupyter
Install OGGM itself¶
First, choose which version of OGGM you would like to install:
stable: this is the latest version officially released and has a fixed version number (e.g. v1.1).
dev: this is the development version. It might contain new features and bug fixes, but is also likely to continue to change until a new release is made. This is the recommended way if you want to use the latest changes to the code.
dev+code: this is the recommended way if you plan to explore the OGGM codebase, contribute to the model, and/or if you want to use the most recent model updates.
‣ install the stable version:
If you are using conda, you can install stable OGGM as a normal conda package:
conda install -c oggm oggm
If you are using pip, you can install OGGM from PyPI:
pip install oggm
‣ install the dev version:
For this to work you’ll need to have the git software installed on your system. In your conda environment, simply do:
pip install --upgrade git+https://github.com/OGGM/oggm.git
With this command you can also update an already installed OGGM version to the latest version.
‣ install the dev version + get access to the OGGM code:
For this to work you’ll need to have the git software installed on your system. Then, clone the latest repository version:
git clone https://github.com/OGGM/oggm.git
Then go to the project root directory:
cd oggm
And install OGGM in development mode (this is valid for both pip and conda environments):
pip install -e .
Note
Installing OGGM in development mode means that subsequent changes to this code repository will be taken into account the next time you import oggm. You can also update OGGM with a simple git pull from the root of the cloned repository.
Testing OGGM¶
You can test your OGGM installation by running the following command from anywhere (don’t forget to activate your environment first):
pytest --pyargs oggm
The tests can run for a couple of minutes. If everything worked fine, you should see something like:
=============================== test session starts ===============================
platform linux -- Python 3.5.2, pytest-3.3.1, py-1.5.2, pluggy-0.6.0
Matplotlib: 2.1.1
Freetype: 2.6.1
rootdir:
plugins: mpl-0.9
collected 164 items
oggm/tests/test_benchmarks.py ... [ 1%]
oggm/tests/test_graphics.py ................... [ 13%]
oggm/tests/test_models.py ................sss.ss.....sssssss [ 34%]
oggm/tests/test_numerics.py .ssssssssssssssss [ 44%]
oggm/tests/test_prepro.py .......s........................s..s....... [ 70%]
oggm/tests/test_utils.py .....................sss.s.sss.sssss..ss. [ 95%]
oggm/tests/test_workflow.py sssssss [100%]
==================== 112 passed, 52 skipped in 187.35 seconds =====================
You can safely ignore deprecation warnings and other messages (if any), as long as the tests end without errors.
This runs a minimal suite of tests. If you want to run the entire test suite (including graphics and slow running tests), type:
pytest --pyargs oggm --run-slow --mpl
Congrats, you are now set-up for the Getting started section!
Install with virtualenv (Linux/Debian)¶
Note
The installation with virtualenv and pip requires a few more steps than with conda. Unless you have a good reason to install by this route, installing with conda is probably what you want to do.
The instructions below have been tested on Debian / Ubuntu / Mint systems only!
Linux packages¶
Run the following commands to install required packages.
For the build:
$ sudo apt-get install build-essential python-pip liblapack-dev gfortran libproj-dev python-setuptools
For matplolib:
$ sudo apt-get install tk-dev python3-tk python3-dev
For GDAL:
$ sudo apt-get install gdal-bin libgdal-dev python-gdal
For NetCDF:
$ sudo apt-get install netcdf-bin ncview python-netcdf4
Virtual environment¶
Next follow these steps to set up a virtual environment.
Install extensions to virtualenv:
$ sudo apt-get install virtualenvwrapper
Reload your profile:
$ source /etc/profile
Make a new environment, for example called oggm_env, with Python 3:

$ mkvirtualenv oggm_env -p /usr/bin/python3
(further details can be found for example in this tutorial)
Python packages¶
Be sure to be on the working environment:
$ workon oggm_env
Update pip (important!):
$ pip install --upgrade pip
Install some packages one by one:
$ pip install numpy scipy pandas shapely matplotlib
Installing GDAL is not so straightforward. First, check which version of GDAL is installed on your Linux system:
$ gdal-config --version
The version of the gdal Python package should match that of the GDAL library installed on your system (e.g. 2.2.0, 2.3.1, …). For example, if the Linux GDAL version is 2.2.0, install the latest corresponding Python package version.
The following command works on any system and automatically gets the right version:
$ pip install gdal=="$(gdal-config --version)" --install-option="build_ext" --install-option="$(gdal-config --cflags | sed 's/-I/--include-dirs=/')"
Fiona also builds upon GDAL, so let’s compile it the same way:
$ pip install fiona --install-option="build_ext" --install-option="$(gdal-config --cflags | sed 's/-I/--include-dirs=/')"
(Details can be found in this blog post.)
Now install further dependencies:
$ pip install pyproj rasterio Pillow geopandas netcdf4 scikit-image configobj joblib xarray boto3 progressbar2 pytest motionless dask bottleneck
Finally, install the salem and python-colorspace libraries:
$ pip install git+https://github.com/fmaussion/salem.git
$ pip install git+https://github.com/retostauffer/python-colorspace.git
OGGM and tests¶
Refer to Install OGGM itself above.
Getting started¶
Although largely automatised, the OGGM model still requires some python scripting to prepare and run a simulation. This documentation will guide you through several examples to get you started.
First step: system settings for input data¶
OGGM will automatically download all the data it needs for a simulation at run time. You can specify where on your computer these files should be stored for later use. Let’s start by opening a python interpreter and type in:
In [1]: from oggm import cfg
In [2]: cfg.initialize()
At your very first import, this will do two things:
It will download a small subset of data used for testing and calibration. This data is located in your home directory, in a hidden folder called .oggm.
It will create a configuration file in your home folder, where you can indicate where you want to store further input data. This configuration file is also located in your home directory, under the name .oggm_config.
To locate this config file, you can type:
In [3]: cfg.CONFIG_FILE
Out[3]: '/home/docs/.oggm_config'
See Input data for an explanation of these entries.
Important
The default settings will probably work for you, but we recommend to have a look at this file and set the paths to a directory where enough space is available: a minimum of 8 GB for all climate data and glacier outlines is necessary. Topography data can quickly grow to several GB as well, even for regional runs.
OGGM workflow¶
For a step-by-step tutorial of the entire OGGM workflow, download and run the getting started jupyter notebook (right-click → “Save link as”).
Alternatively, you can try OGGM directly in your browser without having to install anything! Follow this link to get started.
OGGM run scripts¶
Refer to Set-up an OGGM run for real-world applications.
Input data¶
OGGM needs various data files to run. Currently, we rely exclusively on open-access data that are all downloaded automatically for the user. This page explains the various ways OGGM uses to get to the data it needs.
First, you will have to set-up your System settings (you’ll need to do this only once per computer). Then, we recommend to start your runs from Pre-processed directories: these are ready-to-run Glacier directories at various levels of pre-processing state, thus reducing the amount of pre-processing you’ll have to do yourself. It is also possible to do a full run “from scratch”, in which case OGGM will download the Raw data sources for you as well.
System settings¶
OGGM implements a bunch of tools to make access to the input data as painless as possible for you, including the automated download of all the required files. This requires you to tell OGGM where to store these data.
Calibration data and testing: the ~/.oggm directory¶
At the first import, OGGM will create a cached .oggm directory in your $HOME folder. This directory contains all data obtained from the oggm sample data repository. It contains several files needed only for testing, but also some important files needed for calibration and validation.
For example:
The CRU baseline climatology (CL v2.0, obtained from crudata.uea.ac.uk/ and prepared for OGGM),
The reference mass-balance data from WGMS with links to the respective RGI polygons,
The reference ice thickness data from WGMS (GlaThiDa database).
The ~/.oggm directory should be updated automatically when you update OGGM, but if you encounter any problems with it, simply delete the directory (it will be re-downloaded automatically at the next import).
All other data: auto-downloads and the ~/.oggm_config file¶
Unlike runtime parameters (such as physical constants or working directories), the input data is shared across runs, and even across computers if you want to. Therefore, the paths to previously downloaded data are stored in a configuration file that you’ll find in your $HOME folder: the ~/.oggm_config file.
The file should look like:
dl_cache_dir = /path/to/download_cache
dl_cache_readonly = False
tmp_dir = /path/to/tmp_dir
cru_dir = /path/to/cru_dir
rgi_dir = /path/to/rgi_dir
test_dir = /path/to/test_dir
has_internet = True
Some explanations:

- dl_cache_dir is a path to a directory where all the files you download will be cached for later use. Most users won’t need to explore this folder (it is organized as a list of urls), but you have to make sure to set this path to a folder with sufficient disk space available. This folder can be shared across computers if needed. Once a file is stored in this cache folder, OGGM won’t download it again.
- dl_cache_readonly indicates whether writing to this folder is disallowed (the default is False). Setting this to True will prevent any further download into this directory (useful for cluster environments, where this data might be available in a read-only folder).
- tmp_dir is a path to OGGM’s temporary directory. Most of the topography files used by OGGM are downloaded and cached in a compressed (zip) format. They will be extracted in tmp_dir before use. OGGM will never allow more than 100 .tif files to exist in this directory, deleting the least recently used (LRU) ones first. Nevertheless, this directory might still grow to quite a large size: simply delete it if you want to get this space back.
- cru_dir is the location where the CRU climate files are extracted if needed. They are quite large! (approx. 6 GB)
- rgi_dir is the location where the RGI shapefiles are extracted.
- test_dir is the location where OGGM will write some of its output during tests. It can be set to tmp_dir if you want to, but it can also be another directory (for example a fast SSD disk). This folder shouldn’t become too large, but here again, don’t hesitate to delete it if you need the space.
Note
For advanced users or cluster configurations: tmp_dir, cru_dir and rgi_dir can be overridden and set to a specific directory by defining an environment variable OGGM_EXTRACT_DIR pointing to a directory path. Similarly, the environment variables OGGM_DOWNLOAD_CACHE and OGGM_DOWNLOAD_CACHE_RO override the dl_cache_dir and dl_cache_readonly settings.
Pre-processed directories¶
The simplest way to run OGGM is to rely on Glacier directories which have been prepared for you by the OGGM developers. Depending on your use case, you can start from various levels of pre-processing, and various map sizes.
All these directories have been generated with the default parameters of the current stable OGGM version. If you want to change these parameters, you’ll have to do a full run from scratch using the Raw data sources.
To start from a pre-processed state, simply use the workflow.init_glacier_regions() function with the from_prepro_level and prepro_border keyword arguments set to the values of your choice.
Processing levels¶
Currently, there are four available levels of pre-processing:
Level 1: the lowest level, with directories containing the glacier topography and glacier outlines only.
Level 2, adding the baseline climate timeseries (CRU, see below) to this folder.
Level 3, adding the output of all necessary pre-processing tasks for a dynamical run, including the bed inversion using the default parameters.
Level 4, same as level 3 but with all intermediate output files removed. The strong advantage of level 4 files is that their size is considerably reduced, at the cost that certain operations (like plotting on maps or running the bed inversion algorithm again) are no longer possible.
In practice, most users are going to use level 3 or level 4 files, with some use cases relying on lower levels.
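For example, starting from level 3 directories for the Hintereisferner (the RGI id used elsewhere in this documentation; the working directory path is a placeholder):

from oggm import cfg, workflow

cfg.initialize()
cfg.PATHS['working_dir'] = 'path/to/workdir'  # placeholder

# download (or reuse from cache) the pre-processed directory and
# start from it, with a map border of 80 grid points
gdirs = workflow.init_glacier_regions(['RGI60-11.00897'],
                                      from_prepro_level=3,
                                      prepro_border=80)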
Map size¶
The size of the local glacier map is given in number of grid points outside the glacier boundaries. The larger the map, the larger the glacier can become. Therefore, users should choose the map border parameter depending on the expected glacier growth in their simulations: for most cases, a border value of 80 or 160 should be enough.
Here is an example with the Hintereisferner in the Alps:
In [1]: f, axs = plt.subplots(2, 2, figsize=(8, 6))
In [2]: for ax, border in zip(np.array(axs).flatten(), [10, 80, 160, 250]):
...: gdir = workflow.init_glacier_regions('RGI60-11.00897',
...: from_prepro_level=1,
...: prepro_border=border)
...: graphics.plot_domain(gdir, ax=ax, title='Border: {}'.format(border),
...: add_colorbar=False,
...: lonlat_contours_kwargs={'add_tick_labels':False})
...:
In [3]: plt.tight_layout(); plt.show()

For runs into the Little Ice Age, a border value of 160 is more than enough. For simulations into the 21st century, a border value of 80 is sufficient.
Users should be aware that the amount of data to download isn’t small, especially for full directories at level 3. Here is an indicative table of the total amount of data for all 18 RGI regions (excluding Antarctica):
| Level | B 10 | B 80 | B 160 | B 250 |
|-------|------|------|-------|-------|
| L1    | 2.4G | 11G  | 29G   | 63G   |
| L2    | 5.1G | 14G  | 32G   | 65G   |
| L3    | 13G  | 44G  | 115G  | 244G  |
| L4    | 4.2G | 4.5G | 4.8G  | 5.3G  |
Certain regions are of course much smaller than others. As an indication, with prepro level 3 and a map border of 160, the Alps amount to 2G, Greenland to 11G, and Iceland to 334M.
Therefore, it is recommended to always pick the smallest border value suitable for your research question, and to start your runs from level 4 if possible.
Note
The data download of the preprocessed directories will occur only once: after the first download, the data will be cached in OGGM’s dl_cache_dir folder (see above).
Raw data sources¶
These data are used to create the pre-processed directories explained above. If you want to run your own workflow from A to Z, or if you would like to know which data are used in OGGM, read further!
Glacier outlines and intersects¶
Glacier outlines are obtained from the Randolph Glacier Inventory (RGI). We recommend to download them right away by opening a python interpreter and type:
from oggm import utils
utils.get_rgi_intersects_dir()
utils.get_rgi_dir()
The RGI folders should now contain the glacier outlines in the shapefile format, a format widely used in GIS applications. These files can be read by several software packages (e.g. QGIS), and OGGM can read them too.
The “RGI Intersects” shapefiles contain the locations of the ice divides (intersections between neighboring glaciers). OGGM can make use of them to determine which bed shape should be used (rectangular or parabolic). See the rgi tools documentation for more information about the intersects.
Topography data¶
When creating a Glacier directory, a suitable topographical data source is chosen automatically, depending on the glacier’s location. Currently we use:
the Shuttle Radar Topography Mission (SRTM) 90m Digital Elevation Database v4.1 for all locations in the [60°S; 60°N] range
the Greenland Mapping Project (GIMP) Digital Elevation Model for mountain glaciers in Greenland (RGI region 05)
the Radarsat Antarctic Mapping Project (RAMP) Digital Elevation Model, Version 2 for mountain glaciers in the Antarctic continent (RGI region 19 with the exception of the peripheral islands)
the Viewfinder Panoramas DEM3 products elsewhere (most notably: North America, Russia, Iceland, Svalbard)
These data are downloaded only when needed (i.e. during an OGGM run) and they are stored in the dl_cache_dir directory. The gridded topography is then reprojected and resampled to the local glacier map. The local grid is defined on a Transverse Mercator projection centered over the glacier, and has a spatial resolution depending on the glacier size. The default in OGGM is to use the following rule:

\[\Delta x = d_1 \sqrt{S} + d_2\]

where \(\Delta x\) is the grid spatial resolution (in m), \(S\) the glacier area (in km\(^{2}\)) and \(d_1\), \(d_2\) some parameters (set to 14 and 10, respectively). If the chosen spatial resolution is larger than 200 m (\(S \ge\) 185 km\(^{2}\)) we clip it to this value.
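In code, the rule reads (values from the text; the function name is illustrative):

import numpy as np

def grid_resolution(area_km2, d1=14., d2=10., d_max=200.):
    """Local map resolution (m) as a function of glacier area (km2)."""
    return min(d1 * np.sqrt(area_km2) + d2, d_max)

# e.g. a 1 km2 glacier -> 24 m, a 100 km2 glacier -> 150 m,
# anything above ~185 km2 -> clipped to 200 m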
Important: when using these data sources for your OGGM runs, please refer to the original provider of the data! OGGM will add a dem_source.txt file in each glacier directory specifying how to cite these data. We reproduce this information here:
- SRTM V4
Jarvis A., H.I. Reuter, A. Nelson, E. Guevara, 2008, Hole-filled seamless SRTM data V4, International Centre for Tropical Agriculture (CIAT), available from http://srtm.csi.cgiar.org.
- RAMP V2
Liu, H., K. C. Jezek, B. Li, and Z. Zhao. 2015. Radarsat Antarctic Mapping Project Digital Elevation Model, Version 2. Boulder, Colorado USA. NASA National Snow and Ice Data Center Distributed Active Archive Center. doi: https://doi.org/10.5067/8JKNEW6BFRVD.
- GIMP V1.1
Howat, I., A. Negrete, and B. Smith. 2014. The Greenland Ice Mapping Project (GIMP) land classification and surface elevation data sets, The Cryosphere. 8. 1509-1518. https://doi.org/10.5194/tc-8-1509-2014
- VIEWFINDER PANORAMAS DEMs
There is no recommended citation for these data. Please refer to the website above in case of doubt.
Warning
A number of glaciers will still suffer from poor topographic information. Either the errors are large or obvious (in which case the model won’t run), or they are left unnoticed. The importance of reliable topographic data for global glacier modelling cannot be emphasized enough, and it is a pity that no consistent, global DEM is yet available for scientific use.
Climate data¶
The MB model implemented in OGGM needs monthly time series of temperature and precipitation. The current default is to download and use the CRU TS data provided by the Climatic Research Unit of the University of East Anglia.
‣ CRU (default)
If not specified otherwise, OGGM will automatically download and unpack the latest dataset from the CRU servers. We recommend to do this before your first run. In a python interpreter, type:
from oggm import utils
utils.get_cru_file(var='tmp')
utils.get_cru_file(var='pre')
Warning
While the downloaded zip files are ~370 MB in size, they are ~5.6 GB large after decompression!
The raw, coarse (0.5°) dataset is then downscaled to a higher resolution grid (CRU CL v2.0 at 10’ resolution) following the anomaly mapping approach described by Tim Mitchell in his CRU faq (Q25). Note that we don’t expect this downscaling to add any information beyond what is already available at the original resolution, but it allows us to have an elevation-dependent dataset based on a presumably better climatology. The monthly anomalies are computed following Harris et al., (2010): we use standard anomalies for temperature and scaled (fractional) anomalies for precipitation. At the locations where the monthly precipitation climatology is 0, we fall back to the standard anomalies.
When using these data, please refer to the original provider:
Harris, I., Jones, P. D., Osborn, T. J., & Lister, D. H. (2014). Updated high-resolution grids of monthly climatic observations - the CRU TS3.10 Dataset. International Journal of Climatology, 34(3), 623–642. https://doi.org/10.1002/joc.3711
‣ User-provided dataset
You can provide any other dataset to OGGM by setting the climate_file parameter in params.cfg. See the HISTALP data file in the sample-data folder for an example.
‣ GCM data
OGGM can also use climate model output to drive the mass-balance model. In this case we still rely on gridded observations (CRU) for the baseline climatology and apply the GCM anomalies computed from a preselected reference period. This method is sometimes called the delta method.
Currently we can process data from the CESM Last Millennium Ensemble project (see tasks.process_cesm_data()), but adding other models should be relatively easy.
Mass-balance data¶
In-situ mass-balance data is used by OGGM to calibrate and validate the mass-balance model. We rely on mass-balance observations provided by the World Glacier Monitoring Service (WGMS). The Fluctuations of Glaciers (FoG) database contains annual mass-balance values for several hundred glaciers worldwide. We exclude water-terminating glaciers and time series with less than five years of data. Since 2017, the WGMS provides a lookup table linking the RGI and the WGMS databases. We updated this list for version 6 of the RGI, leaving us with 254 mass-balance time series. These are not evenly distributed over the globe:

Map of the RGI regions; the red dots indicate the glacier locations and the blue circles the location of the 254 reference WGMS glaciers used by the OGGM calibration. From our GMD paper.¶
These data are shipped automatically with OGGM. All reference glaciers have access to the timeseries through the glacier directory:
In [4]: gdir = workflow.init_glacier_regions('RGI60-11.00897', from_prepro_level=4,
...: prepro_border=10)[0]
...:
In [5]: mb = gdir.get_ref_mb_data()
In [6]: mb[['ANNUAL_BALANCE']].plot(title='WGMS data: Hintereisferner')
Out[6]: <matplotlib.axes._subplots.AxesSubplot at 0x7f85953fc5c0>

Parallel computations¶
OGGM is designed to use the available resources as well as possible. For single-node machines with more than one processor (a frequent case for personal computers), OGGM ships with a multiprocessing approach which is fairly simple to use. For cluster environments, use MPI.
Multiprocessing¶
Most OGGM computations are embarrassingly parallel: they are standalone operations applied to one single glacier entity and therefore independent from each other (they are called entity tasks, as opposed to the non-parallelizable global tasks).
When given a list of glacier directories on which to apply a given task, workflow.execute_entity_task() will distribute the operations on the available processors using Python’s multiprocessing module. You can control this behavior with the use_multiprocessing config parameter, and the number of processors with mp_processes.
The default in OGGM is:
In [1]: from oggm import cfg
In [2]: cfg.initialize()
In [3]: cfg.PARAMS['use_multiprocessing'] # whether to use multiprocessing
Out[3]: True
In [4]: cfg.PARAMS['mp_processes'] # number of processors to use
Out[4]: -1
-1 means that all available processors will be used.
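For illustration, here is a minimal sketch of a multiprocessed workflow (the glacier ID and the task are placeholders; any entity task works the same way):
from oggm import cfg, utils, workflow, tasks

cfg.initialize()
cfg.PATHS['working_dir'] = utils.gettempdir('OGGM_mp_example')
cfg.PARAMS['use_multiprocessing'] = True  # the default
cfg.PARAMS['mp_processes'] = 4  # or -1 for all available processors
# Get a (here: very short) list of glacier directories to work on
gdirs = workflow.init_glacier_regions(['RGI60-11.00897'],
                                      from_prepro_level=4,
                                      prepro_border=10)
# The task is applied to each glacier, distributed over the processors
workflow.execute_entity_task(tasks.glacier_masks, gdirs)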
MPI¶
OGGM can be run in a clustered environment using standard MPI features. In that case, OGGM depends on mpi4py, which can be installed either via conda:
conda install -c conda-forge mpi4py
or pip:
pip install mpi4py
mpi4py itself depends on a working MPI environment, which is usually supplied by the maintainers of your cluster. On conda, it comes with its own copy of mpich, which is nice and easy for quick testing, but likely undesirable for the performance of actual runs.
For an actual run, invoke any script using OGGM via mpiexec, and pass the --mpi parameter to the script itself:
mpiexec -n 10 python ./run_rgi_region.py --mpi
Be aware that the first process (rank 0) is the manager process, which does not do any calculations itself and is only used to distribute tasks. So the actual number of working processes is one lower than the number passed to mpiexec / your cluster’s scheduler.
Set up an OGGM run¶
This section presents example scripts for regional or global runs with OGGM. The examples can be run on a laptop within a few minutes. The mass-balance calibration example requires more resources, but a reasonably powerful personal computer should be OK.
For region-wide or global simulations, we recommend using a cluster environment. OGGM should work well on cloud computing services like Amazon or Google Cloud, and we are in the process of testing such a deployment.
Before you start, make sure you have had a look at the Input data section.
1. Equilibrium runs on a subset of an RGI Region¶
This example shows how to run the OGGM glacier evolution model on a subset of the Alps (the Rofental catchment in the Austrian Alps).
The first part of the script sets up the run and selects certain glaciers out of the RGI file. If you are not familiar with RGI files, you might want to do our short tutorial first.
Then, we download the Pre-processed directories for this run. We use the level 4 data, which contain enough files to start a dynamical run, but not much more!
After the download, starting the runs takes only one command. We also show an example with a temperature bias, illustrating the sensitivity of these glaciers to temperature change.
Script¶
# Python imports
import logging
# Libs
import geopandas as gpd
import shapely.geometry as shpg
# Locals
import oggm.cfg as cfg
from oggm import utils, workflow, tasks
# For timing the run
import time
start = time.time()
# Module logger
log = logging.getLogger(__name__)
# Initialize OGGM and set up the default run parameters
cfg.initialize(logging_level='WORKFLOW')
rgi_version = '61'
rgi_region = '11' # Region Central Europe
# Here we override some of the default parameters
# How many grid points around the glacier?
# Make it large if you expect your glaciers to grow large:
# here, 80 is more than enough
cfg.PARAMS['border'] = 80
# Local working directory (where OGGM will write its output)
WORKING_DIR = utils.gettempdir('OGGM_Rofental')
utils.mkdir(WORKING_DIR, reset=True)
cfg.PATHS['working_dir'] = WORKING_DIR
# RGI file
path = utils.get_rgi_region_file(rgi_region, version=rgi_version)
rgidf = gpd.read_file(path)
# Get the Rofental Basin file
path = utils.get_demo_file('rofental_hydrosheds.shp')
basin = gpd.read_file(path)
# Take all glaciers in the Rofental Basin
in_bas = [basin.geometry.contains(shpg.Point(x, y))[0] for
(x, y) in zip(rgidf.CenLon, rgidf.CenLat)]
rgidf = rgidf.loc[in_bas]
# Sort for more efficient parallel computing
rgidf = rgidf.sort_values('Area', ascending=False)
log.workflow('Starting OGGM run')
log.workflow('Number of glaciers: {}'.format(len(rgidf)))
# Go - get the pre-processed glacier directories
gdirs = workflow.init_glacier_regions(rgidf, from_prepro_level=4)
# We can step directly to a new experiment!
# Random climate representative for the recent climate (1985-2015)
# This is a kind of "commitment" run
workflow.execute_entity_task(tasks.run_random_climate, gdirs,
nyears=300, y0=2000, seed=1,
output_filesuffix='_commitment')
# Now we add a positive and a negative bias to the random temperature series
workflow.execute_entity_task(tasks.run_random_climate, gdirs,
nyears=300, y0=2000, seed=2,
temperature_bias=0.5,
output_filesuffix='_bias_p')
workflow.execute_entity_task(tasks.run_random_climate, gdirs,
nyears=300, y0=2000, seed=3,
temperature_bias=-0.5,
output_filesuffix='_bias_m')
# Write the compiled output
utils.compile_glacier_statistics(gdirs)
utils.compile_run_output(gdirs, filesuffix='_commitment')
utils.compile_run_output(gdirs, filesuffix='_bias_p')
utils.compile_run_output(gdirs, filesuffix='_bias_m')
# Log
m, s = divmod(time.time() - start, 60)
h, m = divmod(m, 60)
log.workflow('OGGM is done! Time needed: %d:%02d:%02d' % (h, m, s))
If everything went well, you should see an output similar to:
2019-02-16 17:50:51: oggm.cfg: Using configuration file: /home/mowglie/Documents/git/oggm-fork/oggm/params.cfg
2019-02-16 17:50:52: __main__: Starting OGGM run
2019-02-16 17:50:52: __main__: Number of glaciers: 54
2019-02-16 17:50:52: oggm.workflow: init_glacier_regions from prepro level 4 on 54 glaciers.
2019-02-16 17:50:52: oggm.workflow: Execute entity task gdir_from_prepro on 54 glaciers
2019-02-16 17:50:52: oggm.workflow: Multiprocessing: using all available processors (N=8)
2019-02-16 17:50:54: oggm.workflow: Execute entity task run_random_climate on 54 glaciers
2019-02-16 17:51:44: oggm.workflow: Execute entity task run_random_climate on 54 glaciers
2019-02-16 17:52:36: oggm.workflow: Execute entity task run_random_climate on 54 glaciers
2019-02-16 17:54:11: __main__: OGGM is done! Time needed: 0:03:20
Some analyses¶
The output directory contains many output files, most of them in the individual glacier directories. Most often, users will use the compiled data files, where the most relevant model outputs are stored together:
- the compile_glacier_statistics.csv file contains various information for each glacier (here the output is limited because we used a preprocessing level of 4 - see other examples for more information).
- the run_output_*.nc files contain the volume, area and length timeseries of each individual glacier.
Let’s have a look at them:
# Imports
import os
import xarray as xr
import matplotlib.pyplot as plt
from oggm.utils import get_demo_file, gettempdir
# Local working directory (where OGGM wrote its output)
WORKING_DIR = gettempdir('OGGM_Rofental')
# Read the files using xarray
ds = xr.open_dataset(os.path.join(WORKING_DIR, 'run_output_commitment.nc'))
dsp = xr.open_dataset(os.path.join(WORKING_DIR, 'run_output_bias_p.nc'))
dsm = xr.open_dataset(os.path.join(WORKING_DIR, 'run_output_bias_m.nc'))
# Compute and plot the regional sums
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
# Volume
(ds.volume.sum(dim='rgi_id') * 1e-9).plot(ax=ax1, label='[1985-2015]')
(dsp.volume.sum(dim='rgi_id') * 1e-9).plot(ax=ax1, label='+0.5°C')
(dsm.volume.sum(dim='rgi_id') * 1e-9).plot(ax=ax1, label='-0.5°C')
ax1.legend(loc='best')
# Area
(ds.area.sum(dim='rgi_id') * 1e-6).plot(ax=ax2, label='[1985-2015]')
(dsp.area.sum(dim='rgi_id') * 1e-6).plot(ax=ax2, label='+0.5°C')
(dsm.area.sum(dim='rgi_id') * 1e-6).plot(ax=ax2, label='-0.5°C')
plt.tight_layout()
# Pick a specific glacier (Hintereisferner)
rid = 'RGI60-11.00897'
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
# Volume
(ds.volume.sel(rgi_id=rid) * 1e-9).plot(ax=ax1, label='[1985-2015]')
(dsp.volume.sel(rgi_id=rid) * 1e-9).plot(ax=ax1, label='+0.5°C')
(dsm.volume.sel(rgi_id=rid) * 1e-9).plot(ax=ax1, label='-0.5°C')
ax1.legend(loc='best')
# Length
(ds.length.sel(rgi_id=rid) * 1e-3).plot(ax=ax2, label='[1985-2015]')
(dsp.length.sel(rgi_id=rid) * 1e-3).plot(ax=ax2, label='+0.5°C')
(dsm.length.sel(rgi_id=rid) * 1e-3).plot(ax=ax2, label='-0.5°C')
plt.tight_layout()
plt.show()
This code snippet should produce the following plots:
The graphics above show the basin sums of glacier volume (km3) and area (km2) in a random climate corresponding to the period [1985-2015] (i.e. each year is picked randomly out of this period). This is a “commitment” run, i.e. showing the expected glacier change even if climate doesn’t change any more in the future. The time axis shows the number of years since the glacier inventory date (here, 2003).
Note
The glacier area and length plots can be noisy in certain conditions because OGGM currently doesn’t differentiate between snow and ice, i.e. occasional years with large snowfall can artificially increase the glacier area. The effect on volume is negligible, though. A good way to deal with this noise is to smooth the time series, as sketched below; see also our dedicated tutorial.
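A minimal smoothing sketch with xarray’s rolling mean, using the compiled output of the run above (the 5-year window is an arbitrary choice):
import os
import xarray as xr
from oggm.utils import gettempdir

WORKING_DIR = gettempdir('OGGM_Rofental')
ds = xr.open_dataset(os.path.join(WORKING_DIR, 'run_output_commitment.nc'))
# A 5-year centered rolling mean removes most of the snow-cover noise
area = ds.area.sum(dim='rgi_id') * 1e-6
area.rolling(time=5, center=True, min_periods=1).mean().plot()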
The graphics below are for the Hintereisferner glacier only, and they show glacier volume and glacier length evolution:
2. Ice thickness inversion¶
This example shows how to run the OGGM ice thickness inversion model with various ice parameters: the deformation parameter A and a sliding parameter.
There is currently no “best” set of parameters for the ice thickness inversion model. As shown in Maussion et al. (2018), the default parameter set results in global volume estimates which are a bit larger than previous values. For the consensus estimate of Farinotti et al. (2018), OGGM participated with a deformation parameter 1.5 times larger than the generally accepted default value.
There is no reason to think that the ice parameters are the same between neighboring glaciers. There is currently no “good” way to calibrate them. We won’t discuss the details here, but we provide a script to illustrate the sensitivity of the model to this choice.
Script¶
Here, we run the inversion algorithm for all glaciers in the Pyrenees.
# Python imports
import logging
# Libs
import geopandas as gpd
# Locals
import oggm.cfg as cfg
from oggm import utils, workflow, tasks
# For timing the run
import time
start = time.time()
# Module logger
log = logging.getLogger(__name__)
# Initialize OGGM and set up the default run parameters
cfg.initialize(logging_level='WORKFLOW')
rgi_version = '61'
rgi_region = '11' # Region Central Europe
# Local working directory (where OGGM will write its output)
WORKING_DIR = utils.gettempdir('OGGM_Inversion')
utils.mkdir(WORKING_DIR, reset=True)
cfg.PATHS['working_dir'] = WORKING_DIR
# RGI file
path = utils.get_rgi_region_file(rgi_region, version=rgi_version)
rgidf = gpd.read_file(path)
# Select the glaciers in the Pyrenees
rgidf = rgidf.loc[rgidf['O2Region'] == '2']
# Sort for more efficient parallel computing
rgidf = rgidf.sort_values('Area', ascending=False)
log.workflow('Starting OGGM inversion run')
log.workflow('Number of glaciers: {}'.format(len(rgidf)))
# Go - get the pre-processed glacier directories
# We start at level 3, because we need all data for the inversion
gdirs = workflow.init_glacier_regions(rgidf, from_prepro_level=3,
prepro_border=10)
# Default parameters
# Deformation: from Cuffey and Patterson 2010
glen_a = 2.4e-24
# Sliding: from Oerlemans 1997
fs = 5.7e-20
# Correction factors
factors = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]
factors += [1.1, 1.2, 1.3, 1.5, 1.7, 2, 2.5, 3, 4, 5]
factors += [6, 7, 8, 9, 10]
# Run the inversions tasks with the given factors
for f in factors:
# Without sliding
suf = '_{:03d}_without_fs'.format(int(f * 10))
workflow.execute_entity_task(tasks.mass_conservation_inversion, gdirs,
glen_a=glen_a*f, fs=0)
workflow.execute_entity_task(tasks.filter_inversion_output, gdirs)
# Store the results of the inversion only
utils.compile_glacier_statistics(gdirs, filesuffix=suf,
inversion_only=True)
# With sliding
suf = '_{:03d}_with_fs'.format(int(f * 10))
workflow.execute_entity_task(tasks.mass_conservation_inversion, gdirs,
glen_a=glen_a*f, fs=fs)
workflow.execute_entity_task(tasks.filter_inversion_output, gdirs)
# Store the results of the inversion only
utils.compile_glacier_statistics(gdirs, filesuffix=suf,
inversion_only=True)
# Log
m, s = divmod(time.time() - start, 60)
h, m = divmod(m, 60)
log.workflow('OGGM is done! Time needed: %d:%02d:%02d' % (h, m, s))
If everything went well, you should see an output similar to:
2019-02-16 22:16:19: oggm.cfg: Using configuration file: /home/mowglie/Documents/git/oggm-fork/oggm/params.cfg
2019-02-16 22:16:20: __main__: Starting OGGM inversion run
2019-02-16 22:16:20: __main__: Number of glaciers: 35
2019-02-16 22:16:20: oggm.workflow: init_glacier_regions from prepro level 3 on 35 glaciers.
2019-02-16 22:16:20: oggm.workflow: Execute entity task gdir_from_prepro on 35 glaciers
2019-02-16 22:16:20: oggm.workflow: Multiprocessing: using all available processors (N=8)
2019-02-16 22:16:21: oggm.workflow: Execute entity task mass_conservation_inversion on 35 glaciers
2019-02-16 22:16:21: oggm.workflow: Execute entity task filter_inversion_output on 35 glaciers
(...)
2019-02-16 22:16:25: oggm.workflow: Execute entity task filter_inversion_output on 35 glaciers
2019-02-16 22:16:25: oggm.workflow: Execute entity task glacier_statistics on 35 glaciers
2019-02-16 22:16:25: __main__: OGGM is done! Time needed: 0:00:05
Analysing the output¶
The output directory contains the glacier_statistics_*.csv files, which compile the output of each inversion experiment. Let’s run the analysis in a Jupyter notebook; it’s so much easier!
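If you prefer a quick look outside a notebook, here is a sketch reading a few of the compiled files (we assume the inv_volume_km3 column written by OGGM’s glacier statistics; check the columns of your own files):
import os
import pandas as pd
from oggm import utils

WORKING_DIR = utils.gettempdir('OGGM_Inversion')
# Total regional volume as a function of the correction factor
for f in [0.1, 1, 10]:
    suf = '_{:03d}_without_fs'.format(int(f * 10))
    fpath = os.path.join(WORKING_DIR,
                         'glacier_statistics{}.csv'.format(suf))
    df = pd.read_csv(fpath, index_col=0)
    print('Factor {}: total volume {:.3f} km3'.format(
        f, df['inv_volume_km3'].sum()))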
3. Deal with errors¶
OGGM will not be able to run on a number of glaciers. We summarized the reasons for this in our GMD publication and showed that the number of failing glaciers can be quite high depending on the region considered.
In this example, we run the model on a list of three glaciers: two of them will end with errors: one because it already failed at preprocessing (i.e. prior to this run), and one will fail during the run.
Script¶
The important part of the script is setting the continue_on_error global parameter to True. This allows OGGM to continue the run despite some glaciers encountering errors, a useful feature for regional or global applications.
# Python imports
import logging
# Locals
import oggm.cfg as cfg
from oggm import utils, workflow, tasks
# For timing the run
import time
start = time.time()
# Module logger
log = logging.getLogger(__name__)
# Initialize OGGM and set up the default run parameters
cfg.initialize(logging_level='WORKFLOW')
# Here we override some of the default parameters
# How many grid points around the glacier?
# We make it small because we want the model to fail by
# flowing out of the domain
cfg.PARAMS['border'] = 80
# This is the important bit!
# We tell OGGM to continue despite of errors
cfg.PARAMS['continue_on_error'] = True
# Local working directory (where OGGM will write its output)
WORKING_DIR = utils.gettempdir('OGGM_Errors')
utils.mkdir(WORKING_DIR, reset=True)
cfg.PATHS['working_dir'] = WORKING_DIR
rgi_ids = ['RGI60-11.00897', 'RGI60-11.01450', 'RGI60-11.03295']
log.workflow('Starting OGGM run')
log.workflow('Number of glaciers: {}'.format(len(rgi_ids)))
# Go - get the pre-processed glacier directories
gdirs = workflow.init_glacier_regions(rgi_ids, from_prepro_level=4)
# We can step directly to the experiment!
# Random climate representative for the recent climate (1985-2015)
# with a negative bias added to the random temperature series
workflow.execute_entity_task(tasks.run_random_climate, gdirs,
nyears=150, seed=0,
temperature_bias=-1)
# Write the compiled output
utils.compile_glacier_statistics(gdirs)
utils.compile_run_output(gdirs)
# Log
m, s = divmod(time.time() - start, 60)
h, m = divmod(m, 60)
log.workflow('OGGM is done! Time needed: %d:%02d:%02d' % (h, m, s))
If everything went “well”, you should see an output similar to:
2019-02-21 15:51:39: oggm.cfg: Using configuration file: /home/mowglie/Documents/git/oggm-fork/oggm/params.cfg
2019-02-21 15:51:39: __main__: Starting OGGM run
2019-02-21 15:51:39: __main__: Number of glaciers: 3
2019-02-21 15:51:39: oggm.workflow: init_glacier_regions from prepro level 4 on 3 glaciers.
2019-02-21 15:51:39: oggm.workflow: Execute entity task gdir_from_prepro on 3 glaciers
2019-02-21 15:51:39: oggm.workflow: Multiprocessing: using all available processors (N=8)
2019-02-21 15:51:40: oggm.workflow: Execute entity task run_random_climate on 3 glaciers
2019-02-21 15:51:40: oggm.core.flowline: RuntimeError occurred during task run_random_climate on RGI60-11.03295: Need a valid `model_flowlines` file. If you explicitly want to use `inversion_flowlines`, set use_inversion_flowlines=True.
2019-02-21 15:52:11: oggm.core.flowline: RuntimeError occurred during task run_random_climate on RGI60-11.00897: Glacier exceeds domain boundaries.
2019-02-21 15:52:11: oggm.workflow: Execute entity task glacier_statistics on 3 glaciers
2019-02-21 15:52:11: __main__: OGGM is done! Time needed: 0:00:32
Dealing with errors¶
The simplest way to deal with errors is to look for NaNs (“not a number”) in the run_output*.nc files: glaciers with errors will have NaNs in place of their timeseries (see the sketch after this list). For an in-depth analysis of the cause of these errors, OGGM delivers some tools to help you out. We have prepared two notebooks to show you their usage:
deal_with_errors.ipynb builds upon the example above to analyse the output
preprocessing_errors.ipynb reproduces the error analysis of our GMD paper
You can open these tutorials online.
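A minimal sketch of the NaN check mentioned above, using the compiled output of this example:
import os
import xarray as xr
from oggm import utils

WORKING_DIR = utils.gettempdir('OGGM_Errors')
ds = xr.open_dataset(os.path.join(WORKING_DIR, 'run_output.nc'))
# Glaciers whose volume timeseries contains NaNs did not complete the run
failed = ds.volume.isnull().any(dim='time')
print(ds.rgi_id.values[failed.values])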
4. Run with a spinup and GCM data¶
The initial state of glaciers plays a large role for the model output. In this example we illustrate how to “spin up” the glaciers (i.e., let them grow) before running the climate period. For this example we use data from the CESM Last Millennium Ensemble.
Script¶
# Python imports
import time
import logging
# Libs
import matplotlib.pyplot as plt
# Locals
import oggm.cfg as cfg
from oggm import tasks, utils, workflow
from oggm.workflow import execute_entity_task
from oggm.utils import get_demo_file
# Module logger
log = logging.getLogger(__name__)
# For timing the run
start = time.time()
# Initialize OGGM and set up the default run parameters
cfg.initialize()
# Local working directory (where OGGM will write its output)
WORKING_DIR = utils.gettempdir('OGGM_spinup_run')
utils.mkdir(WORKING_DIR, reset=True)
cfg.PATHS['working_dir'] = WORKING_DIR
# Use multiprocessing?
cfg.PARAMS['use_multiprocessing'] = True
# How many grid points around the glacier?
# Make it large if you expect your glaciers to grow large
cfg.PARAMS['border'] = 80
# Go - initialize glacier directories
gdirs = workflow.init_glacier_regions(['RGI60-11.00897'], from_prepro_level=4)
# Additional climate file (CESM)
cfg.PATHS['cesm_temp_file'] = get_demo_file('cesm.TREFHT.160001-200512'
'.selection.nc')
cfg.PATHS['cesm_precc_file'] = get_demo_file('cesm.PRECC.160001-200512'
'.selection.nc')
cfg.PATHS['cesm_precl_file'] = get_demo_file('cesm.PRECL.160001-200512'
'.selection.nc')
execute_entity_task(tasks.process_cesm_data, gdirs)
# Run the last 200 years with the default starting point (current glacier)
# and CESM data as input
execute_entity_task(tasks.run_from_climate_data, gdirs,
climate_filename='gcm_data',
ys=1801, ye=2000,
output_filesuffix='_no_spinup')
# Run the spinup simulation - t* climate with a cold temperature bias
execute_entity_task(tasks.run_constant_climate, gdirs,
nyears=100, bias=0, temperature_bias=-0.5,
output_filesuffix='_spinup')
# Run a past climate run based on this spinup
execute_entity_task(tasks.run_from_climate_data, gdirs,
climate_filename='gcm_data',
ys=1801, ye=2000,
init_model_filesuffix='_spinup',
output_filesuffix='_with_spinup')
# Compile output
log.info('Compiling output')
utils.compile_glacier_statistics(gdirs)
ds1 = utils.compile_run_output(gdirs, filesuffix='_no_spinup')
ds2 = utils.compile_run_output(gdirs, filesuffix='_with_spinup')
# Log
m, s = divmod(time.time() - start, 60)
h, m = divmod(m, 60)
log.info('OGGM is done! Time needed: %d:%02d:%02d' % (h, m, s))
# Plot
f, ax = plt.subplots(figsize=(9, 4))
(ds1.volume.sum(dim='rgi_id') * 1e-9).plot(ax=ax, label='No spinup')
(ds2.volume.sum(dim='rgi_id') * 1e-9).plot(ax=ax, label='With spinup')
ax.set_ylabel('Volume (km$^3$)')
ax.set_xlabel('')
ax.set_title('Hintereisferner volume under CESM forcing')
plt.legend()
plt.tight_layout()
plt.show()
The output should look like this:
5. Run the mass-balance calibration¶
Sometimes you will need to do the mass-balance calibration yourself, for example if you use alternative climate data or if you change the global parameters of the model (such as the precipitation scaling factor or the melting threshold). Here we show how to run the calibration for all available reference glaciers. We also use this example to illustrate how to do a “full run”, i.e. without relying on Pre-processed directories.
The output of this script is the ref_tstars.csv file, which is found in the working directory. The ref_tstars.csv file can then be used for further runs, simply by copying it into the corresponding working directory before the run.
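For example (a sketch; the source path is a placeholder for wherever you stored the calibration output):
import os
import shutil
from oggm import cfg, utils

cfg.initialize()
cfg.PATHS['working_dir'] = utils.gettempdir('OGGM_my_run')
# Copy a previously computed calibration file into the new working
# directory before running the climate tasks
shutil.copyfile('/path/to/ref_tstars.csv',
                os.path.join(cfg.PATHS['working_dir'], 'ref_tstars.csv'))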
Script¶
# Python imports
import json
import os
# Libs
import numpy as np
# Locals
import oggm
from oggm import cfg, utils, tasks, workflow
from oggm.workflow import execute_entity_task
from oggm.core.massbalance import (ConstantMassBalance, PastMassBalance,
MultipleFlowlineMassBalance)
# Module logger
import logging
log = logging.getLogger(__name__)
# RGI Version
rgi_version = '61'
# CRU or HISTALP?
baseline = 'CRU'
# Initialize OGGM and set up the run parameters
cfg.initialize()
# Local paths (where to write the OGGM run output)
dirname = 'OGGM_ref_mb_{}_RGIV{}_OGGM{}'.format(baseline, rgi_version,
oggm.__version__)
WORKING_DIR = utils.gettempdir(dirname, home=True)
utils.mkdir(WORKING_DIR, reset=True)
cfg.PATHS['working_dir'] = WORKING_DIR
# We are running the calibration ourselves
cfg.PARAMS['run_mb_calibration'] = True
# Which baseline climate data are we using?
cfg.PARAMS['baseline_climate'] = baseline
# Use multiprocessing?
cfg.PARAMS['use_multiprocessing'] = True
# Set to True for operational runs - here we want all glaciers to run
cfg.PARAMS['continue_on_error'] = False
if baseline == 'HISTALP':
# Other params: see https://oggm.org/2018/08/10/histalp-parameters/
cfg.PARAMS['baseline_y0'] = 1850
cfg.PARAMS['prcp_scaling_factor'] = 1.75
cfg.PARAMS['temp_melt'] = -1.75
# Get the reference glacier ids (they are different for each RGI version)
rgi_dir = utils.get_rgi_dir(version=rgi_version)
df, _ = utils.get_wgms_files()
rids = df['RGI{}0_ID'.format(rgi_version[0])]
# We can't do Antarctica
rids = [rid for rid in rids if not ('-19.' in rid)]
# For HISTALP only RGI reg 11
if baseline == 'HISTALP':
rids = [rid for rid in rids if '-11.' in rid]
# Make a new dataframe with those (this takes a while)
log.info('Reading the RGI shapefiles...')
rgidf = utils.get_rgi_glacier_entities(rids, version=rgi_version)
log.info('For RGIV{} we have {} candidate reference '
'glaciers.'.format(rgi_version, len(rgidf)))
# We have to check which of them actually have enough mb data.
# Let OGGM do it:
gdirs = workflow.init_glacier_regions(rgidf)
# We need to know which period we have data for
log.info('Process the climate data...')
if baseline == 'CRU':
execute_entity_task(tasks.process_cru_data, gdirs, print_log=False)
elif baseline == 'HISTALP':
cfg.PARAMS['continue_on_error'] = True # Some glaciers are not in Alps
execute_entity_task(tasks.process_histalp_data, gdirs, print_log=False)
cfg.PARAMS['continue_on_error'] = False
gdirs = utils.get_ref_mb_glaciers(gdirs)
# Keep only these
rgidf = rgidf.loc[rgidf.RGIId.isin([g.rgi_id for g in gdirs])]
# Save
log.info('For RGIV{} and {} we have {} reference glaciers.'.format(rgi_version,
baseline,
len(rgidf)))
rgidf.to_file(os.path.join(WORKING_DIR, 'mb_ref_glaciers.shp'))
# Sort for more efficient parallel computing
rgidf = rgidf.sort_values('Area', ascending=False)
# Go - initialize glacier directories
gdirs = workflow.init_glacier_regions(rgidf)
# Prepro tasks
task_list = [
tasks.glacier_masks,
tasks.compute_centerlines,
tasks.initialize_flowlines,
tasks.catchment_area,
tasks.catchment_intersections,
tasks.catchment_width_geom,
tasks.catchment_width_correction,
]
for task in task_list:
execute_entity_task(task, gdirs)
# Climate tasks
tasks.compute_ref_t_stars(gdirs)
execute_entity_task(tasks.local_t_star, gdirs)
execute_entity_task(tasks.mu_star_calibration, gdirs)
# We store the associated params
mb_calib = gdirs[0].read_pickle('climate_info')['mb_calib_params']
with open(os.path.join(WORKING_DIR, 'mb_calib_params.json'), 'w') as fp:
json.dump(mb_calib, fp)
# And also some statistics
utils.compile_glacier_statistics(gdirs)
# Tests: for all glaciers, the mass-balance around tstar and the
# bias with observation should be approx 0
for gd in gdirs:
mb_mod = MultipleFlowlineMassBalance(gd,
mb_model_class=ConstantMassBalance,
use_inversion_flowlines=True,
bias=0) # bias=0 because of calib!
mb = mb_mod.get_specific_mb()
np.testing.assert_allclose(mb, 0, atol=5) # atol for numerical errors
mb_mod = MultipleFlowlineMassBalance(gd, mb_model_class=PastMassBalance,
use_inversion_flowlines=True)
refmb = gd.get_ref_mb_data().copy()
refmb['OGGM'] = mb_mod.get_specific_mb(year=refmb.index)
np.testing.assert_allclose(refmb.OGGM.mean(), refmb.ANNUAL_BALANCE.mean(),
atol=5) # atol for numerical errors
# Log
log.info('Calibration is done!')
Cross-validation¶
The results of the cross-validation are found in the crossval_tstars.csv file. Let’s replicate Figure 3 in Marzeion et al. (2012):
# Python imports
import os
# Libs
import numpy as np
import pandas as pd
import geopandas as gpd
# Locals
import oggm
from oggm import cfg, workflow, tasks, utils
from oggm.core.massbalance import PastMassBalance, MultipleFlowlineMassBalance
import matplotlib.pyplot as plt
# RGI Version
rgi_version = '61'
# CRU or HISTALP?
baseline = 'CRU'
# Initialize OGGM and set up the run parameters
cfg.initialize()
if baseline == 'HISTALP':
# Other params: see https://oggm.org/2018/08/10/histalp-parameters/
cfg.PARAMS['prcp_scaling_factor'] = 1.75
cfg.PARAMS['temp_melt'] = -1.75
# Local paths (where to find the OGGM run output)
dirname = 'OGGM_ref_mb_{}_RGIV{}_OGGM{}'.format(baseline, rgi_version,
oggm.__version__)
WORKING_DIR = utils.gettempdir(dirname, home=True)
cfg.PATHS['working_dir'] = WORKING_DIR
# Read the rgi file
rgidf = gpd.read_file(os.path.join(WORKING_DIR, 'mb_ref_glaciers.shp'))
# Go - initialize glacier directories
gdirs = workflow.init_glacier_regions(rgidf)
# Cross-validation
file = os.path.join(cfg.PATHS['working_dir'], 'ref_tstars.csv')
ref_df = pd.read_csv(file, index_col=0)
for i, gdir in enumerate(gdirs):
print('Cross-validation iteration {} of {}'.format(i + 1, len(ref_df)))
# Now recalibrate the model blindly
tmp_ref_df = ref_df.loc[ref_df.index != gdir.rgi_id]
tasks.local_t_star(gdir, ref_df=tmp_ref_df)
tasks.mu_star_calibration(gdir)
# Mass-balance model with cross-validated parameters instead
mb_mod = MultipleFlowlineMassBalance(gdir, mb_model_class=PastMassBalance,
use_inversion_flowlines=True)
# Mass-balance timeseries, observed and simulated
refmb = gdir.get_ref_mb_data().copy()
refmb['OGGM'] = mb_mod.get_specific_mb(year=refmb.index)
# Compare their standard deviation
std_ref = refmb.ANNUAL_BALANCE.std()
rcor = np.corrcoef(refmb.OGGM, refmb.ANNUAL_BALANCE)[0, 1]
if std_ref == 0:
# I think that such a thing happens with some geodetic values
std_ref = refmb.OGGM.std()
rcor = 1
# Store the scores
ref_df.loc[gdir.rgi_id, 'CV_MB_BIAS'] = (refmb.OGGM.mean() -
refmb.ANNUAL_BALANCE.mean())
ref_df.loc[gdir.rgi_id, 'CV_MB_SIGMA_BIAS'] = (refmb.OGGM.std() / std_ref)
ref_df.loc[gdir.rgi_id, 'CV_MB_COR'] = rcor
# Write out
ref_df.to_csv(os.path.join(cfg.PATHS['working_dir'], 'crossval_tstars.csv'))
# Marzeion et al Figure 3
f, ax = plt.subplots(1, 1)
bins = np.arange(20) * 400 - 3800
ref_df['CV_MB_BIAS'].plot(ax=ax, kind='hist', bins=bins, color='C3', label='')
ax.vlines(ref_df['CV_MB_BIAS'].mean(), 0, 120, linestyles='--', label='Mean')
ax.vlines(ref_df['CV_MB_BIAS'].quantile(), 0, 120, label='Median')
ax.vlines(ref_df['CV_MB_BIAS'].quantile([0.05, 0.95]), 0, 120, color='grey',
label='5% and 95%\npercentiles')
ax.text(0.01, 0.99, 'N = {}'.format(len(gdirs)),
horizontalalignment='left',
verticalalignment='top',
transform=ax.transAxes)
ax.set_ylim(0, 120)
ax.set_ylabel('N Glaciers')
ax.set_xlabel('Mass-balance error (mm w.e. yr$^{-1}$)')
ax.legend(loc='best')
plt.tight_layout()
plt.show()
scores = 'Median bias: {:.2f}\n'.format(ref_df['CV_MB_BIAS'].median())
scores += 'Mean bias: {:.2f}\n'.format(ref_df['CV_MB_BIAS'].mean())
scores += 'RMS: {:.2f}\n'.format(np.sqrt(np.mean(ref_df['CV_MB_BIAS']**2)))
scores += 'Sigma bias: {:.2f}\n'.format(np.mean(ref_df['CV_MB_SIGMA_BIAS']))
# Output
print(scores)
fn = os.path.join(WORKING_DIR, 'scores.txt')
with open(fn, 'w') as f:
f.write(scores)
This should generate an output similar to:
Median bias: 18.78
Mean bias: 15.34
RMS: 518.83
Sigma bias: 0.87

Error (bias) distribution of the mass-balance computed with a leave-one-out cross-validation.¶
API Reference¶
This page lists all available functions and classes in OGGM. It is hard work to keep everything up to date, so don’t hesitate to let us know (see Get in touch) if something’s missing, or help us (see Contributing to OGGM) to write better documentation!
Workflow¶
Tools to set up and run OGGM.
- Read the configuration file containing the run’s parameters.
- Set the global logger parameters.
- Set the glacier intersection database for OGGM to use.
- Deletes the content of the working directory.
- Initializes the list of Glacier Directories for this run.
- Execute a task on gdirs.
- Shortcut function: run all flowline preprocessing tasks.
- Shortcut function: run all climate related tasks.
- Shortcut function: run all ice thickness inversion tasks.
- Gather as much statistics as possible about a list of glaciers.
- Merge the output of the model runs of several gdirs into one file.
- Merge the climate input files in the glacier directories into one file.
- Gathers the log output for the selected task(s).
- Copies the glacier directories and their content to a new location.
Troubleshooting¶
- Prints the OGGM version and other system information.
Input/Output¶
- Returns the path to the desired OGGM-sample-file.
- Path to the RGI directory.
- Path to the RGI region file.
- Get a list of glacier outlines selected from their RGI IDs.
- Path to the RGI directory containing the intersect files.
- Path to the RGI regional intersect file.
- Get a list of glacier intersects selected from their RGI IDs.
- Returns a path to the desired CRU baseline climate file.
- Returns a path to the desired HISTALP baseline climate file.
- Get the list of glaciers we have valid mass balance measurements for.
Entity tasks¶
Entity tasks are tasks which are applied to single glaciers individually and do not require information from other glaciers (this encompasses the majority of OGGM’s tasks). They are parallelizable.
- Very first task: define the glacier’s local grid.
- Makes a gridded mask of the glacier outlines and topography.
- Compute the centerlines following Kienholz et al., (2014).
- Computes more physical Inversion Flowlines from geometrical Centerlines.
- Computes the Flowline along the unglaciated downstream topography.
- The bedshape obtained by fitting a parabola to the line’s normals.
- Compute the catchment areas of each tributary line.
- Computes the intersections between the catchments.
- Compute geometrical catchment widths for each point of the flowlines.
- Corrects for NaNs and inconsistencies in the geometrical widths.
- Processes and writes the CRU baseline climate data for this glacier.
- Processes and writes the HISTALP baseline climate data for this glacier.
- Processes and writes the climate data from a user-defined climate file.
- Applies the anomaly method to GCM climate data.
- Processes and writes CESM climate data for this glacier.
- Read, process and store the CMIP5 climate data for this glacier.
- Compute the local t* and associated glacier-wide mu*.
- Compute the flowlines’ mu* and the associated apparent mass-balance.
- Compute apparent mb from a linear mass-balance assumption (for testing).
- Computes the mu candidates, glacier wide.
- Prepares the data needed for the inversion.
- Compute the glacier thickness along the flowlines.
- Filters the last few grid points whilst conserving total volume.
- Compute a thickness map by redistributing mass along altitudinal bands.
- Compute a thickness map by interpolating between centerlines and border.
- Merges data from preprocessing tasks.
- Runs the random mass-balance model for a given number of years.
- Runs the constant mass-balance model for a given number of years.
- Runs a glacier with climate input from e.g. CRU or a GCM.
Global tasks¶
Global tasks are tasks which are run on a set of glaciers (most often: all glaciers in the current run). They are not parallelizable at the user level but might use multiprocessing internally.
- Detects the best t* for the reference glaciers and writes them to disk.
- Gather as much statistics as possible about a list of glaciers.
- Merge the output of the model runs of several gdirs into one file.
- Merge the climate input files in the glacier directories into one file.
- Gathers the log output for the selected task(s).
Classes¶
Listed here are the classes which are relevant at the API level (i.e. classes which are used and re-used across modules and tasks).
- Organizes read and write access to the glacier’s files.
- A Centerline has geometrical and flow rooting properties.
- This is the input flowline for the model.
Glacier directories¶
Glacier directories (see also: GlacierDirectory) are folders on disk which store the input and output data for a single glacier during an OGGM run. The data are on disk to be persistent, i.e. they won’t be deleted unless you ask OGGM to. You can start a run from an existing directory, avoiding re-doing unnecessary computations.
Initialising a glacier directory¶
The easiest way to initialize a glacier directory is to start from a pre-processed state available on the OGGM servers:
In [1]: import os
In [2]: import oggm
In [3]: from oggm import cfg, workflow
In [4]: from oggm.utils import gettempdir
In [5]: cfg.initialize() # always initialize before an OGGM task
# The working directory is where OGGM will store the run's data
In [6]: cfg.PATHS['working_dir'] = os.path.join(gettempdir(), 'Docs_GlacierDir')
In [7]: gdirs = workflow.init_glacier_regions('RGI60-11.00897', from_prepro_level=1,
...: prepro_border=10)
...:
In [8]: gdir = gdirs[0] # init_glacier_regions always returns a list
We just downloaded the minimal input for a glacier directory. The GlacierDirectory object contains the glacier metadata:
In [9]: gdir
Out[9]:
<oggm.GlacierDirectory>
RGI id: RGI60-11.00897
Region: 11: Central Europe
Subregion: 11-01: Alps
Name: Hintereisferner
Glacier type: Glacier
Terminus type: Land-terminating
Area: 8.036 km2
Lon, Lat: (10.7584, 46.8003)
Grid (nx, ny): (139, 94)
Grid (dx, dy): (50.0, -50.0)
In [10]: gdir.rgi_id
Out[10]: 'RGI60-11.00897'
It also points to a specific directory on disk, where the files are written to:
In [11]: gdir.dir
Out[11]: '/tmp/OGGM/Docs_GlacierDir/per_glacier/RGI60-11/RGI60-11.00/RGI60-11.00897'
In [12]: os.listdir(gdir.dir)
Out[12]:
['outlines.tar.gz.properties',
'dem_source.txt',
'diagnostics.json',
'log.txt',
'outlines.tar.gz',
'glacier_grid.json',
'intersects.tar.gz',
'dem.tif']
Users usually don’t have to care about where the data is located. GlacierDirectory objects help you to get this information:
In [13]: fdem = gdir.get_filepath('dem')
In [14]: fdem
Out[14]: '/tmp/OGGM/Docs_GlacierDir/per_glacier/RGI60-11/RGI60-11.00/RGI60-11.00897/dem.tif'
In [15]: import xarray as xr
In [16]: xr.open_rasterio(fdem).plot(cmap='terrain')
Out[16]: <matplotlib.collections.QuadMesh at 0x7f85966bbb00>

This persistence on disk allows, for example, to continue a workflow that was previously interrupted. Initialising a GlacierDirectory from a non-empty folder won’t erase its content:
In [17]: gdir = oggm.GlacierDirectory('RGI60-11.00897')
In [18]: os.listdir(gdir.dir) # the directory still contains the data
Out[18]:
['outlines.tar.gz.properties',
'dem_source.txt',
'diagnostics.json',
'log.txt',
'outlines.tar.gz',
'glacier_grid.json',
'intersects.tar.gz',
'dem.tif']
cfg.BASENAMES¶
This is a list of the files that can be found in the glacier directory or its divides. These data files and their names are standardized and listed in the oggm.cfg module. If you want to implement your own task you’ll have to add an entry to this file too (see the sketch below).
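For example, a new file name can be registered with the cfg.add_to_basenames() helper (as done in the OGGM-contrib template; the file name and docstring below are hypothetical):
from oggm import cfg

cfg.initialize()
# Register a hypothetical new standardized file name, so that
# gdir.get_filepath('my_data') and custom entity tasks can use it
cfg.add_to_basenames('my_data', 'my_data.nc',
                     docstr='My custom gridded data file.')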
- calving_output.pkl
Calving output
- catchments_intersects.shp
The intersections between catchments (shapefile) in the local map projection (Transverse Mercator).
- centerlines.pkl
A list of oggm.Centerline instances, sorted by flow order.
- climate_info.pkl
Some information (dictionary) about the climate data and the mass balance parameters for this glacier.
- climate_monthly.nc
The monthly climate timeseries stored in a netCDF file.
- dem.tif
A geotiff file containing the DEM (reprojected onto the local grid). This DEM is not smoothed or gap-filled, and is the closest to the original DEM source.
- dem_source.txt
A text file with the source of the topo file (GIMP, SRTM, …).
- diagnostics.json
A dictionary containing runtime diagnostics useful for debugging.
- downstream_line.pkl
A dictionary containing the downstream line geometry as well as the bed shape computed from a parabolic fit.
- flowline_catchments.shp
Each flowline has a catchment area computed from flow routing algorithms: this shapefile stores the catchment outlines in the local map projection (Transverse Mercator).
- gcm_data.nc
The monthly GCM climate timeseries stored in a netCDF file.
- geometries.pkl
A dictionary containing the shapely.Polygons of a glacier. The “polygon_hr” entry contains the geometry transformed to the local grid in (i, j) coordinates, while the “polygon_pix” entry contains the geometries transformed into the coarse grid (the i, j elements are integers). The “polygon_area” entry contains the area of the polygon as computed by Shapely. The “catchment_indices” entry contains a list of len n_centerlines, each element containing a numpy array of the indices in the glacier grid which represent the centerline’s catchment area.
- glacier_grid.json
A salem.Grid handling the georeferencing of the local grid.
- gridded_data.nc
A netcdf file containing several gridded data variables such as topography, the glacier masks, the interpolated 2D glacier bed, and more.
- hypsometry.csv
A hypsometry file computed by OGGM and provided in the same format as the RGI (useful for diagnostics).
- intersects.shp
The glacier intersects in the local map projection (Transverse Mercator).
- inversion_flowlines.pkl
A “better” version of the Centerlines, now with a regular spacing, i.e. not on the gridded (i, j) indices. The tails of the tributaries are cut out to make more realistic junctions. They are now “1.5D”, i.e. with a width.
- inversion_input.pkl
List of dicts containing the data needed for the inversion.
- inversion_output.pkl
List of dicts containing the output data from the inversion.
- inversion_params.pkl
Dict of fs and fd as computed by the inversion optimisation.
- linear_mb_params.pkl
When using a linear mass-balance for the inversion, this dict stores the optimal ela_h and grad.
- local_mustar.json
A dict containing the glacier’s t*, bias, and the flowlines’ mu*
- model_diagnostics.nc
A netcdf file containing the model diagnostics (volume, mass-balance, length…).
- model_flowlines.pkl
List of flowlines ready to be run by the model.
- model_run.nc
A netcdf file containing enough information to reconstruct the entire flowline glacier along the run (can be data expensive).
- outlines.shp
The glacier outlines in the local map projection (Transverse Mercator).
Mass-balance¶
The mass-balance models are found in the core.massbalance module. They follow the MassBalanceModel() interface. Here is a quick summary of the units and conventions used by all models:
Units¶
The computed mass-balance is in units of [m ice s-1] (“meters of ice per second”), unless otherwise specified (e.g. for the utility function get_specific_mb). The conversion from the climatic mass-balance ([kg m-2 s-1]) therefore assumes an ice density given by cfg.PARAMS['ice_density'] (currently: 900 kg m-3).
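A quick sketch of this conversion (the input number is arbitrary):
from oggm import cfg

cfg.initialize()
# A climatic mass-balance in [kg m-2 s-1] ...
mb_climatic = 2.0e-5
# ... converted to model units [m ice s-1] with the ice density parameter
mb_ice = mb_climatic / cfg.PARAMS['ice_density']
# ... and, for convenience, to meters of ice per year
mb_ice_per_year = mb_ice * cfg.SEC_IN_YEAR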
Time¶
The time system used by OGGM is a simple “fraction of year” system, where the floating year can be used for conversion to months and years:
In [19]: from oggm.utils import floatyear_to_date, date_to_floatyear
In [20]: date_to_floatyear(1982, 12)
Out[20]: 1982.9166666666667
In [21]: floatyear_to_date(1.2)
Out[21]: (1, 3)
Interface¶
- Common logic for the mass balance models.
- Monthly mass-balance at given altitude(s) for a moment in time.
- Like self.get_monthly_mb(), but for annual MB.
- Specific mb for this year and a specific glacier geometry.
- Compute the equilibrium line altitude for this year.
Models¶
- Constant mass-balance as a linear function of altitude.
- Mass balance during the climate data period.
- Constant mass-balance during a chosen period.
- Random shuffle of all MB years within a given time period.
- Adding uncertainty to a mass balance model.
- Handle mass-balance at the glacier level instead of flowline level.
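A minimal usage sketch, assuming a glacier directory pre-processed at level 4 as in the examples above (the altitudes and the year are arbitrary):
import numpy as np
from oggm import cfg, utils, workflow
from oggm.core.massbalance import PastMassBalance

cfg.initialize()
cfg.PATHS['working_dir'] = utils.gettempdir('OGGM_mb_example')
gdir = workflow.init_glacier_regions('RGI60-11.00897',
                                     from_prepro_level=4,
                                     prepro_border=10)[0]
# Annual mass-balance at a few altitudes, in [m ice s-1]
mb_mod = PastMassBalance(gdir)
heights = np.array([2500., 3000., 3500.])
mb = mb_mod.get_annual_mb(heights, year=2000)
# Convert to meters of ice per year
mb_per_year = mb * cfg.SEC_IN_YEAR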
FAQ and Troubleshooting¶
We list here some of the questions we get most often, either on the OGGM Users mailing list or on GitHub.
General questions¶
What is the difference between OGGM and other glacier models?¶
There are plenty of established ice dynamics models, and some of them are open-source (e.g. PISM, Elmer/Ice).
The purpose of OGGM is to be an easy-to-use, fully automated global glacier model, i.e. applicable to any glacier in the world without specific tuning or tweaking. Therefore, it does not attempt to replace (and even less to compete with) these established ice dynamics models: it can be seen as a “framework”, a set of unified tools which eases the process of working with many mountain glaciers at once.
There is a standard modelling chain in OGGM (with a mass-balance model and a multiple flowline model), but there is no obligation to use all of these tools. For example, we can easily picture a workflow where people use OGGM to create homogenized initial conditions (topography, climate) but use a higher-order dynamical model like PISM instead of the simplified OGGM dynamics. For this kind of workflow, we created the OGGM-contrib example package, which should help OGGM users to implement their own physics in OGGM.
Can I use OGGM to simulate long term glacier evolution?¶
It depends what you mean by “long-term”: at centennial time scales, yes. At millennial time scales, maybe. At glacial time scales, probably not. The major issue we have to face with OGGM is that it uses a “glacier-centric” approach: it can simulate the mountain glaciers and ice caps we know from contemporary inventories, but it cannot simulate glaciers that existed in the past and have since disappeared. Also, if glaciers grow into large ice complexes and ice caps, the flowline assumption becomes much less valid than for the typical valley glaciers found today. For these situations, fully distributed models like PISM are more appropriate.
Troubleshooting¶
Some glaciers exit with errors. What should I do?¶
Many things can go wrong when simulating all the world’s glaciers with a single model. We’ve tried our best, but still many glaciers cannot be simulated automatically. Possible reasons include complex glacier geometries that cannot be simulated by flowlines, very cold climates which don’t allow melting to occur, or numerical instabilities during the simulation. Altogether, 4218 glaciers (3.6% of the total area worldwide) could not be modelled by OGGM in the standard global simulations. Some regions experience more errors than others (see the paper).
When you experience errors, you have to decide if they are due to an error in your code or a problem in OGGM itself. The number and type of errors might help you decide if you want to act and troubleshoot them (see below). Also, always keep in mind that the number of errors is less important than the glacier area they represent: plenty of errors on small glaciers is not as bad as one large glacier missing.
Then, you have to carefully consider how to deal with missing glaciers. Most studies will upscale diagnostic quantities using power laws or interpolation: for example, use volume-area scaling to compute the volume of glaciers that are missing after an OGGM run. Importantly, always be aware that these quantities will be missing from the compiled run outputs, and account for them in quantitative analyses.
What does the “Glacier exceeds domain boundaries” error mean?¶
This happens when a glacier grows larger than the original map boundaries. In this case we recommend increasing the glacier map, by setting cfg.PARAMS['border'] to a larger value, e.g. 100 or 200. The larger this value, the larger the glacier can grow (the drawback is that simulations become slower and more memory-hungry because the number of grid points increases as well). We do not recommend going larger than 250, however: in these cases it is likely that something else is wrong in your workflow or in OGGM itself.
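For example (a sketch; when starting from pre-processed directories, the prepro_border value must match one of the border values available on the servers):
from oggm import cfg, utils, workflow

cfg.initialize()
cfg.PATHS['working_dir'] = utils.gettempdir('OGGM_border_example')
# A larger domain gives the glacier more room to grow
cfg.PARAMS['border'] = 160
gdirs = workflow.init_glacier_regions(['RGI60-11.00897'],
                                      from_prepro_level=4,
                                      prepro_border=160)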
What does the “NaN in numerical solution” error mean?¶
This happens when the ice dynamics simulation becomes unstable. In OGGM we use an adaptive time stepping scheme (which should avoid this kind of situation), but we also implemented thresholds for small time steps: i.e. if a simulation requires very small time steps we still use a larger one to avoid extremely slow runs. These thresholds are “bad practice” but required for operational reasons: when this happens, it is likely that the simulation blows up with a numerical error. There is not much you can do here, except maybe setting your own thresholds for small time steps (at the cost of computation time).
Version history¶
v1.1 (28.02.2019)¶
This is a major new release of the OGGM model, with substantial improvements to version 1. We recommend using this version from now on. It coincides with the publication of our paper in Geoscientific Model Development.
New contributors to the project:
Matthias Dusch (PhD student, University of Innsbruck), added extensive cross-validation tools and an associated website.
Philipp Gregor (Master student, University of Innsbruck), added options to switch on lateral bed stress in the flowline ice dynamics.
Nicolas Champollion (PostDoc, University of Bremen), added GCM data IO routines.
Sadie Bartholomew (Software Engineer, UK Met Office), added ability to replace colormaps in graphics with HCL-based colors using python-colorspace.
Breaking changes¶
The utils.copy_to_basedir() function has been changed to an entity task. In addition, gcm_data files, when present, will from now on always be copied when using this task (GH467 & PR468). By Anouk Vlug.
Accumulation Area Ratio (AAR) is now correctly computed (GH361). By Fabien Maussion.
The method used to apply CRU and GCM anomalies to the climatology has changed for precipitation: we now use scaled anomalies instead of the standard anomalies (PR393). The previous method might have led to negative values in some cases. The corresponding reference t* have also been updated (PR407). This change has some consequences for the model results: cross-validation indicates very similar scores, but the influence on global model output has not been assessed yet. By Fabien Maussion.
It is now possible to run a simulation with spinup in the standard workflow (PR411). For this to happen it was necessary to clean up the many *filesuffix options. The new names are more explicit but not backwards compatible. The previous filesuffix is now called output_filesuffix. The previous input_filesuffix is now called climate_input_filesuffix. The random_glacier_evolution task is now called run_random_climate for consistency with the other tasks. See the PR linked above for more info. By Fabien Maussion.
RGI version 4 isn’t supported anymore (GH142). By Fabien Maussion.
Rework of the 2d interpolation tasks for ice thickness in the context of ITMIX2. The new interpolations are better, but not backwards compatible. Aside from me, I don’t think anybody was using them (PR465). By Fabien Maussion.
Diagnostic variables (length, area, volume, ELA) are now stored at annual steps instead of monthly steps (PR488). The old behavior can still be used with the store_monthly_step kwarg. Most users should not notice this change because the regionally compiled files were stored at yearly steps anyway. By Fabien Maussion.
The list of reference t* dates is now generated differently: instead of the complex (and sort of useless) nearest neighbor algorithm we are now referring back to the original method of Marzeion et al. (2012). This comes together with other breaking changes, altogether likely to change the results of the mass-balance model for some glaciers. For more details see the PR: PR509. By Fabien Maussion.
The ice dynamics parameters (Glen A, N, ice density) are now “real” parameters accessible via cfg.PARAMS (PR520, GH511 and GH27). Previously, they were also accessible via a module attribute in cfg, which was more confusing than helpful. Deprecated and removed a couple of other things on the go, such as the dangerous optimize_inversion_params task (this cannot be optimized yet) and the useless volume_inversion wrapper (now called mass_conservation_inversion). By Fabien Maussion.
The temperature sensitivity mu* is now flowline specific, instead of glacier wide. This change was necessary because it now allows low-lying tributaries to exist, despite a too high glacier-wide mu*. This change had some wider-reaching consequences in the code base and in the mass-balance models in particular: PR539. This will also allow merging neighboring glaciers in the future. By Fabien Maussion.
The “human readable” mu* information is now stored in a JSON dict instead of a csv: PR568. By Fabien Maussion.
The global task glacier_characteristics has been renamed to compile_glacier_statistics (PR571). By Fabien Maussion.
The process_cesm_data task has been moved to gcm_climate.py, addressing GH469 & PR582. By Anouk Vlug.
The shapefiles are now stored in the glacier directories as compressed tar files, addressing GH367 & GH615. This option can be turned off with cfg.PARAMS['use_tar_shapefiles'] = False. By Fabien Maussion.
Enhancements¶
Added a utility function to easily get intersects files (PR402). By Fabien Maussion.
The old GlaThiDa file linking the total volume of glaciers (T database) to RGI has been updated to RGI Version 6 (PR403). Generally, we do not recommend to use these data for calibration or validation because of largely unknown uncertainties. By Fabien Maussion.
The computing efficiency of the 2D shallow ice model has been increased by a factor 2 (PR415), by avoiding useless repetitions of indexing operations. The results shouldn’t change at all. By Fabien Maussion.
Added optional shape factors for mass-conservation inversion and FluxBasedModel to account for lateral drag dependent on the bed shape (PR429). Accepted settings for shape factors are None, ‘Adhikari’ (Adhikari & Marshall 2012), ‘Nye’ (Nye, 1965; equivalent to Adhikari) and ‘Huss’ (Huss & Farinotti 2012). Thorough tests with applied shape factors are still missing. By Philipp Gregor.
Some improvements to the mass-balance models (PR434). Added a repeat kwarg to the PastMassBalance in order to loop over a selected period. Added an UncertainMassBalance model which wraps an existing model and adds random uncertainty to it. By Fabien Maussion.
The DEM sources are now clearly stated in each glacier directory, along with the original data citation (PR441). We encourage to always cite the original data providers. By Fabien Maussion.
Added some diagnostic tools which make it easier to detect dubious glacier outlines or DEM errors (PR445). This will be used to report to the RGI authors. By Fabien Maussion.
Added a new parameter (PARAMS['use_rgi_area']), which specifies whether OGGM should use the reference area provided by RGI or the one computed from the local map and reprojected outlines (PR458, default: True). By Matthias Dusch.
A new simple_glacier_masks task allows to compute glacier rasters in a more robust way than the default OGGM method (PR476). This is useful for simpler workflows or to compute global statistics for external tools like rgitools. This task also computes hypsometry files much like RGI does. By Fabien Maussion.
Reference glaciers now have mass-balance profiles attached to them, if available. You can get the profiles with gdir.get_ref_mb_profile() (PR493). By Fabien Maussion.
New process_histalp_data task to run OGGM with HISTALP data automatically. The task comes with a list of predefined t* like CRU and with different default parameters (see blog). The PR also adds some safety checks at the calibration and computation of the mass-balance to make sure there are no misused parameters (PR493). By Fabien Maussion.
The process_cesm_data function has been split into two functions, to make it easier to run OGGM with the climate of other GCMs: process_cesm_data reads the CESM files and handles the CESM-specific file logic; process_gcm_data is the general task able to handle all kinds of data. process_cesm_data can also be used as an example when you plan to make a function for running OGGM with another GCM (GH469 & PR582). By Anouk Vlug.
New process_dummy_cru_file task to run OGGM with randomized CRU data (PR603). By Fabien Maussion.
Colormaps in some graphics are replaced with Hue-Chroma-Luminance (HCL) based improvements when python-colorspace is (optionally) installed (PR587). By Sadie Bartholomew.
Added a workflow merge_glacier_tasks which merges tributary/surrounding glaciers to a main glacier, allowing mass exchange between them. This is helpful/necessary/intended for growing glacier experiments (e.g. paleoglaciology) (PR624). By Matthias Dusch.
New oggm_prepro command line tool to run the OGGM preprocessing tasks and compress the directories (PR648). By Fabien Maussion.
The init_glacier_regions task now accepts RGI ID strings as input instead of only GeoDataFrames previously (PR656). By Fabien Maussion.
The entity_task decorator now accepts a fallback function which will be executed if a task fails and cfg.PARAMS['continue_on_error'] = True. So far only one fallback function is implemented, for climate.local_t_star (PR663). By Matthias Dusch.
New process_gcm_data task to handle CMIP5 files. By Nicolas Champollion.
Bug fixes¶
Remove dependency to deprecated matplotlib._cntr module (GH418). By Fabien Maussion.
Fixed a bug in tidewater glaciers terminus position finding, where in some rare cases the percentile threshold was too low (PR444). By Fabien Maussion.
Fixed a caching bug in the test suite, where some tests used to fail when run for a second time on a modified gdir (PR448). By Fabien Maussion.
Fixed a problem with netCDF4 versions > 1.3 which returns masked arrays per default. We now prevent netCDF4 to return masked arrays altogether (GH482). By Fabien Maussion.
Internals¶
We now use a dedicated server for input data such as modified RGI files (PR408). By Fabien Maussion.
Test fix for googlemaps. By Fabien Maussion.
Added a utility function (idealized_gdir()) useful to do flowline experiments without having to create a local map (PR413). By Fabien Maussion.
v1.0 (16 January 2018)¶
This is the first official major release of OGGM. It is concomitant with the submission of a manuscript to Geoscientific Model Development (GMD).
This marks the stabilization of the codebase (hopefully) and implies that future changes will have to be documented carefully to ensure traceability.
New contributors to the project:
Anouk Vlug (PhD student, University of Bremen), added the CESM climate data tools.
Anton Butenko (PhD student, University of Bremen), developed the downstream bedshape algorithm.
Beatriz Recinos (PhD student, University of Bremen), participated in the development of the calving parametrization.
Julia Eis (PhD student, University of Bremen), developed the glacier partitioning algorithm.
Schmitty Smith (PhD student, Northland College, Wisconsin US), added optional parameters to the mass-balance models.
v0.1.1 (16 February 2017)¶
Minor release: changes in ITMIX to handle the synthetic glacier cases.
It was tagged only recently for long-term documentation purposes and storage on Zenodo.
v0.1 (29 March 2016)¶
Initial release, used to prepare the data submitted to ITMIX (see here).
This release is the result of several months of development (for a large part outside of GitHub). Several people have contributed to this release:
Michael Adamer (intern, UIBK), participated in the development of the centerline determination algorithm (2014).
Kévin Fourteau (intern, UIBK, ENS Cachan), participated in the development of the inversion and the flowline modelling algorithms (2014-2015).
Alexander H. Jarosch (Associate Professor, University of Iceland), developed the MUSCL-SuperBee model (PR23).
Johannes Landmann (intern, UIBK), participated in the links between databases project (2015).
Ben Marzeion (project leader, University of Bremen).
Fabien Maussion (project leader, UIBK).
Felix Oesterle (Post-Doc, UIBK), develops OGGR and provided the AWS deployment script (PR25).
Timo Rothenpieler (programmer, University of Bremen), participated in the OGGM deployment script (e.g. PR34, PR48), and developed the OGGM installation tools.
Christian Wild (master student, UIBK), participated in the development of the centerline determination algorithm (2014).
Contributing¶
Do you want to contribute to the model? This is the right place to start.
Citing OGGM¶
Publication¶
If you want to refer to OGGM in your publications or presentations, please refer to the paper in Geoscientific Model Development (in review).
BibTeX entry:
@Article{gmd-2018-9,
AUTHOR = {Maussion, F. and Butenko, A. and Eis, J. and Fourteau, K. and
Jarosch, A. H. and Landmann, J. and Oesterle, F. and
Recinos, B. and Rothenpieler, T. and Vlug, A. and
Wild, C. T. and Marzeion, B.},
TITLE = {The Open Global Glacier Model (OGGM) v1.0},
JOURNAL = {Geoscientific Model Development Discussions},
VOLUME = {2018},
YEAR = {2018},
PAGES = {1--33},
URL = {https://www.geosci-model-dev-discuss.net/gmd-2018-9/},
DOI = {10.5194/gmd-2018-9}
}
Software DOI¶
If you want to refer to a specific version of the software you can use the Zenodo citation for this purpose. An example BibTeX entry:
@misc{OGGM_v1.0.0,
author = {Fabien Maussion and Timo Rothenpieler and
Beatriz Recinos and Anouk Vlug and
Ben Marzeion and Felix Oesterle and
Johannes Landmann and Alex Jarosch and
Julia Eis and Anton Butenko and Schmitty Smith},
title = {OGGM/oggm: v1.0.0},
month = jan,
year = 2018,
doi = {10.5281/zenodo.1149701},
url = {https://doi.org/10.5281/zenodo.1149701}
}
Logo¶
The OGGM logos are free to use!
full logo: pdf, large (2004x856), large (2004x856) with alpha channel, small (502x214), small (502x214) with alpha channel
icon logo: small (135x157), small (135x157) with alpha channel
Adding an external module to OGGM¶
Thanks for helping us to make the model better!
There are two ways to add a functionality to OGGM:
The easiest (and recommended) way for small bugfixes or simple functionalities is to add it directly to the main OGGM codebase. In this case, refer to the Contributing to OGGM page for how to do that.
If your endeavor is a larger project (e.g. a fundamental change in the model physics or a new workflow), you can still use option 1 above. However, there might be reasons (explained below) to use a different approach. This is what this page is for.
Why would I add my model to the OGGM set of tools?¶
We envision OGGM as a modular tool to model glacier behavior. We do not want to force anyone towards a certain set-up or parameterization that we chose. However, we believe that the OGGM capabilities could be useful to a wide range of applications, and also to you.
Finally, we strongly believe in model intercomparisons. Agreeing on a common set of boundary conditions is the first step towards meaningful comparisons: this is where OGGM can help.
Can I use my own repository/website and my own code style to do this?¶
Yes you can!
Ideally, we would have your module added to the main codebase: this ensures consistency of the code and continuous testing. However, there are several reasons why making your own repository could be useful:
complying with OGGM's strict testing rules can be annoying, especially in the early stages of development of a model
with your own repository you have full control over its development, while still being able to use the OGGM workflow and multiprocessing capabilities
with an external module, the users of your model will have to download it from your repository, and you can make sure that correct attribution is made to your work, e.g. by specifying that using this module requires a reference to a specific scientific publication
if your funding agency requires you to have your own public website
etc.
To help you with this project we have set up a template repository which you can build upon.
Before writing your own module, we recommend contacting us to discuss the best path to follow in your specific case.
Contributing to OGGM¶
All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome! OGGM is still in an early development phase, so most things are not written in stone and can probably be enhanced/corrected/improved by anyone!
You can report issues or discuss OGGM on the issue tracker.
Copyright note: this page is a shorter version of the excellent pandas documentation.
Working with the code¶
Before you contribute, you will need to learn how to work with GitHub and the OGGM code base.
Version control, Git, and GitHub¶
The code is hosted on GitHub. To contribute you will need to sign up for a free GitHub account. We use Git for version control to allow many people to work together on the project.
Some great resources for learning Git:
the GitHub help pages.
Matthew Brett’s Pydagogue.
Getting started with Git¶
GitHub has instructions for installing git, setting up your SSH key, and configuring git. All these steps need to be completed before you can work seamlessly between your local repository and GitHub.
Forking¶
You will need your own fork to work on the code. Go to the OGGM project page and hit the Fork button. You will want to clone your fork to your machine:
git clone git@github.com:your-user-name/oggm.git oggm-yourname
cd oggm-yourname
git remote add upstream git://github.com/OGGM/oggm.git
This creates the directory oggm-yourname and connects your repository to the upstream (main project) oggm repository.
Creating a branch¶
You want your master branch to reflect only production-ready code, so create a feature branch for making your changes. For example:
git branch shiny-new-feature
git checkout shiny-new-feature
The above can be simplified to:
git checkout -b shiny-new-feature
This changes your working directory to the shiny-new-feature branch. Try to keep any changes in this branch specific to one bug or feature. You can have many shiny-new-features and switch between them using the git checkout command.
To update this branch, you need to retrieve the changes from the master branch:
git fetch upstream
git rebase upstream/master
This will replay your commits on top of the latest oggm git master. If this
leads to merge conflicts, you must resolve these before submitting your pull
request. If you have uncommitted changes, you will need to stash
them prior
to updating. This will effectively store your changes and they can be reapplied
after updating.
Creating a development environment¶
An easy way to create an OGGM development environment is explained in Installing OGGM.
Contributing to the code base¶
Code standards¶
OGGM uses the PEP8 standard. There are several tools to ensure you abide by this standard, and some IDEs (for example PyCharm) will warn you if you don't follow PEP8.
Test-driven development/code writing¶
OGGM is serious about testing and strongly encourages contributors to embrace test-driven development (TDD). Like many packages, OGGM uses the pytest testing system and the convenient extensions in numpy.testing.
All tests should go into the tests
subdirectory of OGGM.
This folder contains many current examples of tests, and we suggest looking to
these for inspiration.
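For orientation, a minimal pytest-style test using numpy.testing could look like this (the function under test is a made-up helper, not part of OGGM):
import numpy as np
from numpy.testing import assert_allclose

def clip_negative(x):
    # Made-up helper: replace negative values with zero.
    return np.where(x < 0, 0, x)

def test_clip_negative():
    x = np.array([-1.0, 0.5, 2.0])
    assert_allclose(clip_negative(x), [0.0, 0.5, 2.0])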
Running the test suite¶
The tests can then be run directly inside your Git clone by typing:
pytest .
The tests can run for several minutes. If everything worked fine, you should see something like:
==== test session starts ====
platform linux -- Python 3.4.3, pytest-3.0.5, py-1.4.31, pluggy-0.4.0
rootdir:
plugins:
collected 92 items
oggm/tests/test_graphics.py ..............
oggm/tests/test_models.py .........s....sssssssssssssssss
oggm/tests/test_prepro.py ...s................s.s...
oggm/tests/test_utils.py ...sss..ss.sssss.
oggm/tests/test_workflow.py ssss
===== 57 passed, 35 skipped in 102.50 seconds ====
You can safely ignore deprecation warnings and other DLL messages as long as the tests end with OK.
Often it is worth first running only a subset of tests around your changes before running the entire suite. This is done using one of the following constructs:
pytest oggm/tests/[test-module].py
pytest oggm/tests/[test-module].py::[TestClass]
pytest oggm/tests/[test-module].py::[TestClass]::[test_method]
Contributing to the documentation¶
Contributing to the documentation is of huge value. Something as simple as rewriting small passages for clarity is an easy but effective way to contribute.
About the documentation¶
The documentation is written in reStructuredText, which is almost like writing in plain English, and built using Sphinx. The Sphinx Documentation has an excellent introduction to reST. Review the Sphinx docs to perform more complex changes to the documentation as well.
Some other important things to know about the docs:
The OGGM documentation consists of two parts: the docstrings in the code itself and the docs in the oggm/docs/ folder. The docstrings should provide a clear explanation of the usage of the individual functions (currently this is not the case everywhere, unfortunately), while the documentation in this folder consists of tutorial-like overviews per topic together with some other information (what's new, installation, etc.).
The docstrings follow the Numpy Docstring Standard, which is used widely in the Scientific Python community. This standard specifies the format of the different sections of the docstring. See this document for a detailed explanation, or look at some of the existing functions to extend them in a similar manner; a short example follows this list.
Some pages make use of the ipython directive sphinx extension. This directive lets you put code in the documentation which will be run during the doc build.
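For orientation, a numpy-style docstring on a made-up function could look like this:
def clip_scalar(value, vmin, vmax):
    """Clip a scalar value between two bounds.

    Parameters
    ----------
    value : float
        The value to clip.
    vmin : float
        Lower bound.
    vmax : float
        Upper bound.

    Returns
    -------
    float
        The clipped value.
    """
    return max(vmin, min(value, vmax))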
How to build the documentation¶
Requirements¶
There are some extra requirements to build the docs: you will need to have sphinx, sphinx_rtd_theme, numpydoc and ipython installed.
If you have a conda environment named oggm_env, you can install the extra requirements with:
conda install -n oggm_env sphinx sphinx_rtd_theme ipython numpydoc
If you use pip, activate your python environment and install the requirements with:
pip install sphinx sphinx_rtd_theme ipython numpydoc
Building the documentation¶
So how do you build the docs? Navigate to your local oggm/docs/ directory in the console and run:
make html
Then you can find the HTML output in the folder oggm/docs/_build/html/.
The first time you build the docs, it will take quite a while because it has to run all the code examples and build all the generated docstring pages. In subsequent invocations, sphinx will try to only build the pages that have been modified.
If you want to do a full clean build, do:
make clean
make html
Open the following file in a web browser to see the full documentation you just built:
oggm/docs/_build/html/index.html
And you’ll have the satisfaction of seeing your new and improved documentation!
Contributing your changes¶
Committing your code¶
Keep style fixes to a separate commit to make your pull request more readable.
Once you’ve made changes, you can see them by typing:
git status
If you have created a new file, it is not being tracked by git. Add it by typing:
git add path/to/file-to-be-added.py
Running git status again should give something like:
# On branch shiny-new-feature
#
# modified: /relative/path/to/file-you-added.py
#
Finally, commit your changes to your local repository with an explanatory message:
git commit -a -m 'added shiny feature'
You can make as many commits as you want before submitting your changes to OGGM, but it is a good idea to keep your commits organised.
Pushing your changes¶
When you want your changes to appear publicly on your GitHub page, push your forked feature branch’s commits:
git push origin shiny-new-feature
Here origin is the default name given to your remote repository on GitHub.
You can see the remote repositories with:
git remote -v
If you added the upstream repository as described above you will see something like:
origin git@github.com:yourname/oggm.git (fetch)
origin git@github.com:yourname/oggm.git (push)
upstream git://github.com/OGGM/oggm.git (fetch)
upstream git://github.com/OGGM/oggm.git (push)
Now your code is on GitHub, but it is not yet a part of the OGGM project. For that to happen, a pull request needs to be submitted on GitHub.
Review your code¶
When you’re ready to ask for a code review, file a pull request. Before you do, once again make sure that you have followed all the guidelines outlined in this document regarding code style, tests, and documentation. You should also double check your branch changes against the branch it was based on:
Navigate to your repository on GitHub – https://github.com/your-user-name/oggm
Click on Branches
Click on the Compare button for your feature branch
Select the base and compare branches, if necessary. This will be master and shiny-new-feature, respectively.
Finally, make the pull request¶
If everything looks good, you are ready to make a pull request. A pull request is how code from a local repository becomes available to the GitHub community and can be looked at and eventually merged into the master version. This pull request and its associated changes will eventually be committed to the master branch and available in the next release. To submit a pull request:
Navigate to your repository on GitHub
Click on the Pull Request button
You can then click on Commits and Files Changed to make sure everything looks okay one last time
Write a description of your changes in the Preview Discussion tab
Click Send Pull Request.
This request then goes to the repository maintainers, and they will review the code. If you need to make more changes, you can make them in your branch, push them to GitHub, and the pull request will be automatically updated. Pushing them to GitHub again is done by:
git push -f origin shiny-new-feature
This will automatically update your pull request with the latest code and restart the Travis-CI tests.
Delete your merged branch (optional)¶
Once your feature branch is accepted into upstream, you’ll probably want to get rid of the branch. First, merge upstream master into your branch so git knows it is safe to delete your branch:
git fetch upstream
git checkout master
git merge upstream/master
Then you can just do:
git branch -d shiny-new-feature
Make sure you use a lower-case -d, or else git won't warn you if your feature branch has not actually been merged.
The branch will still exist on GitHub, so to delete it there do:
git push origin --delete shiny-new-feature
Get in touch¶
View the source code on GitHub.
Report bugs or share your ideas on the issue tracker.
Improve the model by submitting a pull request.
Follow us on Twitter.
Or you can always send us an e-mail the good old way.
License and citation¶
OGGM is available under the open source GNU GPLv3 license.
OGGM is free software. This implies that you are free to use the model and copy or modify its code as you wish, under certain conditions:
When using this software, please acknowledge the original authors of this contribution by linking to our project website www.oggm.org. For your publications, presentations or posters we kindly ask you to refer to the paper in Geoscientific Model Development (in review).
BibTeX entry:
@Article{gmd-2018-9,
AUTHOR = {Maussion, F. and Butenko, A. and Eis, J. and Fourteau, K. and
Jarosch, A. H. and Landmann, J. and Oesterle, F. and
Recinos, B. and Rothenpieler, T. and Vlug, A. and
Wild, C. T. and Marzeion, B.},
TITLE = {The Open Global Glacier Model (OGGM) v1.0},
JOURNAL = {Geoscientific Model Development Discussions},
VOLUME = {2018},
YEAR = {2018},
PAGES = {1--33},
URL = {https://www.geosci-model-dev-discuss.net/gmd-2018-9/},
DOI = {10.5194/gmd-2018-9}
}
If you want to refer to a specific version of the software you can use the Zenodo citation for this purpose.
An example BibTeX entry:
@misc{OGGM_v1.0.0,
author = {Fabien Maussion and Timo Rothenpieler and
Beatriz Recinos and Anouk Vlug and
Ben Marzeion and Felix Oesterle and
Johannes Landmann and Alex Jarosch and
Julia Eis and Anton Butenko and Schmitty Smith},
title = {OGGM/oggm: v1.0.0},
month = jan,
year = 2018,
doi = {10.5281/zenodo.1149701},
url = {https://doi.org/10.5281/zenodo.1149701}
}
Your modifications to the code belong to you, but if you decide to share these modifications with others you’ll have to do so under the same license as OGGM (the GNU General Public License as published by the Free Software Foundation).
See the Wikipedia page about the GPL and the OGGM license for more information.
About¶
- Version
- Citation
- Tests
- License
- Authors
See the version history for a list of all contributors.