
This project is forked from wri/policy-toolkit.


Massively multitask fine-tuning of RoBERTa for policy priority identification in unstructured text.

License: MIT License



Policy-toolkit

This repository contains code and data for the Restoration Research & Monitoring team's initiative to automate the identification of financial incentives and disincentives across policy contexts.

Notebooks

The notebooks folder contains Jupyter and RMarkdown notebooks for setting up the environment, preprocessing data, and performing manual and automatic data labeling.

  • 1-environment-setup: Set up the Jupyter environment (alternative to Docker)
  • 2-extract-transfer-load: Extract text from PDFs and disaggregate it into paragraphs
  • 3-data-labelling: Create the manual gold-standard data set
  • 4-automatic-data-labeling: Label data automatically with data programming in Snorkel
  • 5-roberta-classification: Embed paragraphs as features with the RoBERTa model
  • 6-end-model: Train a noise-aware end model on the Snorkel MeTaL label-model output
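The data-programming step (notebook 4) can be illustrated with minimal keyword-based labeling functions. This is a pure-Python sketch, not the project's actual Snorkel code: the keywords, function names, and majority-vote combiner are all illustrative; Snorkel itself learns a weighted label model rather than taking a simple majority.

```python
# Sketch of data programming: weak labeling functions vote on each paragraph,
# abstaining when they have no opinion. Classes mirror the README's
# [neutral, negative, positive] ordering; keywords are hypothetical.
ABSTAIN, NEUTRAL, NEGATIVE, POSITIVE = -1, 0, 1, 2

def lf_subsidy(text):
    # Paragraphs mentioning subsidies often describe a financial incentive.
    return POSITIVE if "subsidy" in text.lower() else ABSTAIN

def lf_penalty(text):
    # Penalties and fines suggest a disincentive.
    return NEGATIVE if any(w in text.lower() for w in ("penalty", "fine")) else ABSTAIN

def majority_label(text, lfs):
    """Combine labeling-function votes by simple majority, ignoring abstentions."""
    votes = [v for v in (lf(text) for lf in lfs) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

lfs = [lf_subsidy, lf_penalty]
print(majority_label("A new subsidy for reforestation projects.", lfs))  # → 2 (POSITIVE)
```

In the real pipeline, Snorkel replaces `majority_label` with a generative label model that estimates each labeling function's accuracy and outputs a probability distribution per paragraph rather than a hard vote.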

Data

The data folder contains data at each stage of the pipeline, from raw to interim to processed. Raw data are simply PDFs of policy documents. The ETL pipeline produces two main .csv files: gold_standard.csv contains ~1,100 manually labeled paragraphs, and noisy_labels.csv contains ~16,000 paragraphs (soon to be >30,000) labeled with Snorkel.

  • gold_standard.csv: ID, country, policy, page, text, class
  • noisy_labels.csv: ID, country, policy, page, text, (class distributions)
  • snorkel_noisy_proba.csv: class distributions ([neutral, negative, positive]) to join to noisy_labels.csv. Shape is (nrow noisy_labels, 3).
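Because snorkel_noisy_proba.csv has exactly one row per row of noisy_labels.csv, the join is positional rather than keyed. A minimal stdlib sketch of that join, assuming the proba file has no header row (the function name is ours, not from the repository):

```python
import csv

def join_noisy_with_proba(labels_path, proba_path):
    """Positionally join noisy labels with their [neutral, negative, positive]
    class distributions: row i of the proba file belongs to row i of the labels."""
    with open(labels_path, newline="") as f:
        labels = list(csv.DictReader(f))          # labels file has a header
    with open(proba_path, newline="") as f:
        probas = list(csv.reader(f))              # proba file: 3 floats per row
    assert len(labels) == len(probas), "row counts must match for a positional join"
    for row, (neu, neg, pos) in zip(labels, probas):
        row["proba"] = [float(neu), float(neg), float(pos)]
    return labels
```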

Modeling ethos

This project uses data programming to algorithmically label training data, guided by a small, hand-labeled gold standard. Each example's soft label is a probability distribution over the classes, derived from the weak algorithmic labels, and these soft labels are used with a soft implementation of cross entropy.
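The soft cross entropy mentioned above simply replaces the usual one-hot target with the full soft-label distribution. A minimal sketch in plain Python (the function name and `eps` smoothing are ours):

```python
import math

def soft_cross_entropy(pred, target, eps=1e-12):
    """Cross entropy between a soft target distribution and predicted
    probabilities: H(target, pred) = -sum_c target_c * log(pred_c).
    With a one-hot target this reduces to standard cross entropy."""
    return -sum(t * math.log(p + eps) for t, p in zip(target, pred))

# A confident, correct prediction yields a low loss...
low = soft_cross_entropy(pred=[0.9, 0.05, 0.05], target=[1.0, 0.0, 0.0])
# ...while a prediction contradicting the soft label yields a high loss.
high = soft_cross_entropy(pred=[0.05, 0.9, 0.05], target=[1.0, 0.0, 0.0])
```

Training against the full distribution lets the end model learn from the label model's uncertainty instead of forcing every noisy example into a hard class.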

Models are trained on the algorithmically labeled samples and evaluated against the gold-standard labels. The current pipeline is noisy labeling -> RoBERTa encoding -> LSTM.

Future iterations will fine-tune RoBERTa, add further feature engineering, and update the noisy labeling process.

Roadmap

Priorities for WRI team

  • Second validation pass for the gold standard
  • Refine Snorkel data programming
  • Make the workflow from notebook to notebook clearer

Priorities for Columbia team

  • Pilot implementation of BabbleLabble
  • Additional feature engineering, including:
    • spaCy dependency parsing
    • Named entity recognition
    • Topic modeling
    • Universal Sentence Encoder
    • Hidden Markov model
    • DBpedia linking
  • Data augmentation with synonym replacement
  • Model augmentation with slicing functions
  • Fine-tune RoBERTa on noisy labels
  • Massively multitask learning with Snorkel 0.9
  • Named entity disambiguation from positive-class paragraphs: (finance_type, finance_amount, funder, fundee)
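The synonym-replacement augmentation on the roadmap can be sketched in a few lines: swap words for synonyms to generate extra training paragraphs with the same label. The synonym table below is hypothetical; in practice it might come from WordNet or nearest neighbors in an embedding space.

```python
# Hypothetical synonym table for illustration only.
SYNONYMS = {"subsidy": "grant", "penalty": "sanction", "forest": "woodland"}

def augment(text, synonyms=SYNONYMS):
    """Return a copy of `text` with each known lower-case word swapped for its
    synonym; unknown tokens pass through unchanged. A deliberately simple
    sketch: no handling of case, punctuation, or multi-word phrases."""
    return " ".join(synonyms.get(tok, tok) for tok in text.split())

print(augment("a subsidy for forest restoration"))
# → "a grant for woodland restoration"
```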


Project Organization

├── LICENSE
├── Makefile           <- Makefile with commands like `make data` or `make train`
├── Dockerfile         <- Dockerfile to create environment
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
└── tox.ini            <- tox file with settings for running tox; see tox.testrun.org

Project based on the cookiecutter data science project template. #cookiecutterdatascience

Contributors

  • johnmbrandt

