
brazil_gfw_emission_comparison

This script applies the same criteria that Brazil uses for calculating its annual Amazon deforestation statistics to the UMD/Hansen loss. The criteria are that loss must be: 1) within the legal Amazon, 2) within PRODES primary forest for that year, 3) without fire in that year or the preceding year, and 4) in a cluster larger than 6.25 ha. Applying these criteria to annual Hansen loss allows a more direct comparison of loss area and resulting emissions with Brazil's official statistics than comparing the raw datasets would.
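The cluster-size criterion is the least obvious of the four. The sketch below shows one way it can be implemented, using pure-Python connected components on a tiny grid; the pixel size (~0.09 ha for a 30 m Hansen pixel, so 6.25 ha is roughly 70 pixels) and the 4-connectivity are assumptions for illustration, and the actual script does this with ArcPy tools.

```python
def filter_clusters(grid, min_pixels):
    """Zero out loss clusters (4-connected groups of 1s) smaller than min_pixels."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                # flood-fill one cluster of loss pixels
                stack, cluster = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    cluster.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # keep the cluster only if it meets the size threshold
                if len(cluster) >= min_pixels:
                    for y, x in cluster:
                        out[y][x] = 1
    return out

loss = [
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1],
]
# keep only clusters of at least 4 pixels: the two lone pixels are dropped
print(filter_clusters(loss, 4))
```

At real scale, 6.25 ha translates to a `min_pixels` threshold derived from the raster's actual cell area rather than the toy value used here.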

Run Brazil_GFW_emissions_comparison.py on a local computer. It requires ArcPy and several input files: Hansen annual loss tiles for the legal Amazon, annual burned-area tiles, PRODES primary forest extent for each year of analysis, and a legal Amazon boundary shapefile. As each criterion is applied, a new tif and shapefile are created. Because a shapefile is written after each criterion, the effect of each criterion on loss and emissions can be examined. For example, how much does restricting loss to areas that did not burn change the loss/emissions total? The script can take a few hours to run for each year of loss analyzed.
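The per-criterion bookkeeping described above can be sketched as a running intersection, reporting the remaining area after each step. This is only an illustration: pixels are modeled as coordinate sets, the pixel area and the criterion masks are made up, and the real script operates on rasters with ArcPy.

```python
PIXEL_HA = 0.09  # assumed area of one ~30 m Hansen pixel, in hectares

def apply_criteria(loss, criteria):
    """Intersect loss pixels with each criterion mask in turn, reporting areas."""
    report, current = [], set(loss)
    report.append(("raw Hansen loss", len(current) * PIXEL_HA))
    for name, mask in criteria:
        current &= mask  # keep only pixels that also satisfy this criterion
        report.append((name, len(current) * PIXEL_HA))
    return current, report

# hypothetical pixel coordinates for one year
loss = {(0, 0), (0, 1), (1, 1), (5, 5)}
steps = [
    ("within legal Amazon", {(0, 0), (0, 1), (1, 1)}),
    ("within PRODES primary forest", {(0, 0), (1, 1)}),
    ("no fire this/previous year", {(0, 0), (1, 1), (9, 9)}),
]
final, report = apply_criteria(loss, steps)
for name, ha in report:
    print(f"{name}: {ha:.2f} ha")
```

Comparing consecutive rows of the report answers questions like the one above: the drop between two steps is the loss removed by that criterion alone.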

Copy the outputs (tifs and shapefiles) to s3. The tifs are not used for further analysis but are useful for visualization.

After shapefiles for all years are created, the next step is to add a field to each shapefile containing its filename. This is done with prep_for_tsv_creation.py. This field is used in Hadoop to identify which shapefile the results belong to.
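Conceptually, this step stamps every feature with the name of the file it came from so that Hadoop output can be traced back to its source shapefile. The real script does this with ArcPy's AddField/CalculateField tools; the sketch below uses plain dicts in place of feature records, and the field name `src_file` is hypothetical.

```python
import os

def add_filename_field(features, shapefile_path, field="src_file"):
    """Add a field holding the shapefile's base name to every feature record."""
    name = os.path.splitext(os.path.basename(shapefile_path))[0]
    for feat in features:
        feat[field] = name
    return features

records = [{"id": 1}, {"id": 2}]
add_filename_field(records, "loss_2017_no_fire.shp")
print(records)
# every record now carries src_file = "loss_2017_no_fire"
```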

The next step is to convert the shapefiles to tsvs without intersecting them with administrative boundaries (GADM), since this project does not require knowing loss/emissions by administrative region. This can be done with convert-AOI-to-tsv.py in https://github.com/wri/gfw-annual-loss-processing.
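The idea of the conversion is one TSV row per feature: a geometry column plus the attribute columns, with no GADM join. The exact column layout expected by the Hadoop step is defined in convert-AOI-to-tsv.py; the sketch below is only an illustration, with WKT geometry and alphabetically ordered attributes as assumptions.

```python
def features_to_tsv(features):
    """Serialize (wkt, attributes) pairs as tab-separated lines."""
    lines = []
    for wkt, attrs in features:
        # geometry first, then attribute values in a stable (sorted-key) order
        fields = [wkt] + [str(attrs[k]) for k in sorted(attrs)]
        lines.append("\t".join(fields))
    return "\n".join(lines)

feats = [
    ("POLYGON ((0 0, 1 0, 1 1, 0 0))",
     {"src_file": "loss_2017_no_fire", "area_ha": 12.5}),
]
print(features_to_tsv(feats))
```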

Finally, the tsvs are put through Hadoop (https://github.com/wri/gfw-annual-loss-processing/tree/master/1c_Hadoop-Processing) and post-processed (https://github.com/wri/gfw-annual-loss-processing/blob/master/2_Cumulate-Results-and-Create-API-Datasets/cumsum_hadoop_output.py).
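The post-processing step rolls the per-year Hadoop results up into cumulative totals. A minimal sketch of that running sum, with made-up emissions values (the real cumsum_hadoop_output.py operates on the actual Hadoop output format):

```python
def cumulate(yearly):
    """Return {year: cumulative value} over years in ascending order."""
    total, out = 0.0, {}
    for year in sorted(yearly):
        total += yearly[year]
        out[year] = total
    return out

annual_emissions = {2015: 10.0, 2016: 7.5, 2017: 12.0}  # hypothetical values
print(cumulate(annual_emissions))
# → {2015: 10.0, 2016: 17.5, 2017: 29.5}
```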

For more information, refer to the Word document in this repo.

Contributors: dagibbs22
