Valuing the Global Mortality Consequences of Climate Change Accounting for Adaptation Costs and Benefits

Supporting material for Carleton, Tamma, Amir Jina, Michael T. Delgado, Michael Greenstone, Trevor Houser, Solomon M. Hsiang, Andrew Hultgren, Robert E. Kopp, Kelly E. McCusker, Ishan Nath, James Rising, Ashwin Rode, Hee Kwon Seo, Arvid Viaene, Jiacan Yuan, and Alice Tianbo Zhang, “Valuing the Global Mortality Consequences of Climate Change Accounting for Adaptation Costs and Benefits.” Quarterly Journal of Economics, (2022). https://doi.org/10.1093/qje/qjac020

Description

This repository provides the code required to reproduce the tables, figures, and in-text summary statistics in Carleton et al. (2022). Its structure mirrors the analysis in the paper, which proceeds in the following six steps.

  1. Data Collection - Historical data on all-cause mortality and climate are cleaned and merged, along with other covariates needed in our analysis (population and income).
  2. Estimation - Econometric analysis is conducted to estimate empirical mortality-temperature relationships for three age groups (<5, 5-64, >64).
  3. Projection - The age-specific empirical mortality-temperature relationships are used to project the impacts of climate change on mortality for 24,378 regions through 2100, accounting for both uncertainty in future climate (through the use of the surrogate mixed model ensemble, or SMME) and statistical uncertainty in the econometric model through Monte Carlo simulation.
    • Note: this step is exceptionally computationally intensive and relies upon the Climate Impact Lab's projection system, which is composed of a set of public external repositories. Instructions for linking the code and data in this repository to the projection system to reproduce all projection results in Carleton et al. (2022) are provided in the 2_projection/ folder READMEs.
  4. Valuation - Various assumptions regarding the Value of Statistical Life (VSL) are applied to projected impacts on mortality risk, yielding a set of economic damage estimates for all years 2020-2100 in constant 2019 dollars purchasing power parity (PPP). Valuation is performed for all Monte Carlo simulation estimates constructed in Step 3.
  5. Damage Function - Empirical “damage functions” are estimated by relating monetized damages from all Monte Carlo simulations to corresponding Global Mean Surface Temperature (GMST) anomalies from the surrogate mixed model ensemble (SMME).
  6. SCC - Damage functions are used in combination with the simple climate model FAIR to calculate the net present value of future damages associated with an additional ton of carbon dioxide in 2020, which represents a mortality-only partial social cost of carbon under various Representative Concentration Pathways (RCPs) and Shared Socioeconomic Pathways (SSPs).
    • Note: as in Step 3, estimating full uncertainty in the mortality partial SCC (driven both by uncertainty in the damage function and by climate uncertainty) is highly computationally intensive and relies on distributed computing to generate future climate simulations. Code for this is included, but running this step without significant computational resources is not advised. However, constructing point estimates of the mortality partial SCC, with uncertainty derived from the damage function alone, is relatively simple to reproduce. Details are provided in the 5_scc/ folder README.
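As a rough illustration of Steps 5 and 6: a damage function maps a GMST anomaly to monetized damages, and the partial SCC is the discounted sum of year-by-year damage differences between a baseline temperature path and one perturbed by a CO2 pulse. The toy Python sketch below uses invented coefficients, paths, and function names; it is not the repository's code, which estimates damage functions from the full Monte Carlo output and uses FAIR for the pulsed climate:

```python
# Toy sketch of Steps 5-6 (illustrative only; all names and numbers
# here are invented, not taken from the repository).

def damages(t, beta1, beta2):
    """Quadratic damage function: monetized damages at GMST anomaly t."""
    return beta1 * t + beta2 * t ** 2

def partial_scc(beta1, beta2, t_baseline, t_pulsed, discount_rate=0.02):
    """Discounted sum of yearly damage differences between a baseline
    temperature path and one perturbed by a CO2 pulse."""
    return sum(
        (damages(tp, beta1, beta2) - damages(tb, beta1, beta2))
        / (1.0 + discount_rate) ** year
        for year, (tb, tp) in enumerate(zip(t_baseline, t_pulsed))
    )
```

In the actual analysis, the baseline and pulsed temperature trajectories come from FAIR under each RCP/SSP combination, and the result is normalized per ton of CO2.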

User suitability

Please note that the "Projection" step (Step 3 above; code in 2_projection/) is extremely computationally intensive, as it computes a set of daily Monte Carlo simulations at the scale of 24,378 geospatial "impact regions". This step is only feasible on a computing cluster or with cloud computing resources. Similarly, some components of the "Valuation" step (Step 4; code in 3_valuation/) are computationally intensive to replicate, as they perform calculations using all of the Monte Carlo simulation outputs from the projection step.

To ensure users can replicate all other stages of the analysis without directly running the most computationally intensive components, we have included key outputs of the projection and valuation steps as .csv files in the data repository associated with this repo, so that users do not need to re-generate them. More details are provided in the README files within the 2_projection/ and 3_valuation/ folders.
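The provided .csv outputs can be inspected with standard tools. The snippet below is a generic sketch using Python's csv module, with an invented sample standing in for one of the provided files — the filename and column names are assumptions, so consult the 2_projection/ and 3_valuation/ READMEs for the actual schemas:

```python
import csv
import io

# Invented sample standing in for one of the provided projection .csv
# files; the real column names are documented in the folder READMEs.
sample = """region,year,deaths_per_100k
USA.14.608,2050,12.3
USA.14.608,2099,21.7
"""

rows = list(csv.DictReader(io.StringIO(sample)))
mean_rate = sum(float(r["deaths_per_100k"]) for r in rows) / len(rows)
print(round(mean_rate, 1))  # 17.0
```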

Folders

The folders in this repository are broadly consistent with the steps outlined above:

0_data_cleaning/ - Code for cleaning and constructing the dataset used to estimate the mortality-temperature relationship.

1_estimation/ - Code for estimating and plotting all mortality-temperature regression models present in the paper.

2_projection/ - Code for running future projections using Climate Impact Lab projection tools, and extracting, summarizing, and plotting the projection output.

3_valuation/ - Code for calculating the VSL based on various assumptions and applying those values to our projected impacts of climate change on mortality risk.

4_damage_function/ - Code for estimating empirical damage functions based upon monetized damages and GMST anomalies.

5_scc/ - Code for applying a CO2 pulse from the FAIR simple climate model to global damage functions, and summing damages over time to calculate mortality partial SCCs.

For run instructions on each step of the analysis, refer to the README files located within the corresponding directories.

Setup

Requirements For Using Code In This Repo

  1. You need working environments for Python, Stata, and R on your computer.

  2. We use conda to manage Python environments, so if you haven't already done so, we recommend installing conda following the official installation instructions.

Setup Instructions

  1. Clone this repo into a chosen directory, which we'll call yourREPO from now on:
cd <yourREPO>
git clone https://github.com/ClimateImpactLab/carleton_mortality_2022.git
  2. Install the conda environment included in this repo by running the following commands from the root of the repo:
cd <yourREPO>/carleton_mortality_2022
conda env create -f mortalityverse.yml

Try activating the environment:

conda activate mortalityverse

Please remember that you will need to activate this environment whenever you run python scripts in this repo, including the pip install -e . commands in the following section.

You will also need to install Jupyter for the SCC calculation code:

conda install -c conda-forge jupyterlab
  3. Download the data either from Zenodo or from the QJE Dataverse and unzip it somewhere on your machine with at least 85 GB of free space. We'll call this location yourDATA.

  4. Set up a few environment variables so that all the code runs smoothly.

Append the following lines to your ~/.bash_profile.

First, run:

nano ~/.bash_profile

Then, point the DB variable to the data/ directory inside yourDATA, and do the same for OUTPUT and LOG. Point the REPO variable to the yourREPO path used above, which contains this repo, by adding the following lines to .bash_profile:

export REPO=<yourREPO>
export DB=<yourDATA>/data
export OUTPUT=<yourDATA>/output
export LOG=<yourDATA>/log

Save and exit. Then, run source ~/.bash_profile to load the changes we just made.
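Optionally, you can sanity-check that the variables resolve to real directories. This is a generic shell snippet, not part of the repo's scripts:

```shell
# Verify each path variable points at an existing directory.
for d in "$REPO" "$DB" "$OUTPUT" "$LOG"; do
  if [ -d "$d" ]; then echo "OK: $d"; else echo "Missing: $d"; fi
done
```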

  5. Setup for the whole repo is complete! Please follow the READMEs in each subdirectory to run each part of the analysis. In general, each directory contains one or more staging files from which the individual analysis or output-producing scripts can be run in one go. Before running, it is recommended that users review and set the TRUE/FALSE toggles to produce the desired set of outputs. More detail is available in the section READMEs.


carleton_mortality_2022's Issues

Mapping of GADM regions to those used in your paper...

Hello, I am on a project that is trying to use your data, but we first need to understand the mapping of the regions you employed.

For example, for Hungary, GADM.org has 20 regions, but in your final dataset there are only 7 (screenshots omitted).

How can we map GADM regions to "your" regions? Do you have a mapping matrix, or better yet, a shapefile of the final admin regions you used?
I realise you provided a detailed explanation in Appendix C of your paper, and some scripts in the repository that should perform the mapping, but I don't have Stata expertise and I am not sure I have understood all the steps well, so if you have the mapping matrix or the final shapefile, it would be highly appreciated :-)

Thank you!
Antonello Lobianco, BETA Nancy

Replicating on Newer Mac- Python 3.7.6 Not Available

Hello,
Per the detailed instructions in the README, I tried to set up the mortalityverse.yml environment. However, many of the packages in this environment are out of date and can no longer be installed. In particular, Python 3.7 is no longer supported on modern Macs. Revising the .yml file to use more up-to-date versions also does not work due to package conflicts. Has anyone recently tried replicating this project on a Mac?

Questions

I have a couple of questions that I would like to ask:

It seems that the damage coefficients used in the mortality analysis are in this file: mortality_damage_coefficients_quadratic_IGIA_MC_global_poly4_uclip_sharecombo_SSP3.csv

Question 1: In this file, what does heterogeneity, cons, beta1, beta2, anomalymin, and anomalymax mean?

Question 2: If I am correct that beta1 and beta2 are damage coefficients for mortality, how do I convert these beta1 and beta2 in the CSV file above to the beta1 and beta2 used in the DSCIM (specifically, DSCIM/input/damage_functions/mortality_v1 NC4 files based on discount rates).

For instance, how do I convert beta1 and beta2 in the CSV file above to the beta1 and beta2 in risk_aversion_euler_ramsey_eta1.016_rho0.0_dfc.nc4?

Question 3: Is it possible to change the RCP scenarios for carleton_mortality code?

If yes, then how would I be able to do this?

Do I specify RCP scenarios in 2_projection/2_run_projections/main_specification/configs/mortality-generate-montecarlo.yml? And is this the documentation for how to do this? https://github.com/ClimateImpactLab/impact-calculations/blob/master/docs/generate.md

Thank you in advance for your time.
