
Fairness Tutorial

Presenters

  • Rayid Ghani, Carnegie Mellon University
  • Kit T. Rodolfa, RegLab, Stanford University
  • Pedro Saleiro, Feedzai
  • Sérgio Jesus, Feedzai

Earlier versions:

Why this tutorial?

Tackling issues of bias and fairness when building and deploying machine learning and data science systems has received increased attention from the research community in recent years, yet most of that research has focused on theoretical aspects, with a very limited set of application areas and data sets. Today, we lack:

  1. Practical training materials
  2. Methodologies to follow when building ML/data science systems that are fair and equitable to the people affected by them
  3. Tools for researchers and developers working on real-world, ML-based decision-making systems to deal with issues of bias and fairness

Today, treating bias and fairness as primary metrics of interest, and building, selecting, and validating models against those metrics, is not standard practice for data scientists. This tutorial is a step towards changing that.

What will we cover?

In this hands-on tutorial we will bridge the gap between research and practice by exploring fairness at the systems and outcomes level, from metrics and definitions to practical case studies, including bias audits (using the Aequitas toolkit) and the impact of various bias reduction strategies. By the end, the audience will be familiar with bias audit and reduction frameworks and tools that will help them make informed design choices, guided by the contexts in which their systems will be deployed and used.
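To give a concrete sense of what a bias audit computes, here is a minimal, self-contained sketch of per-group error rates and disparity ratios in plain Python. This is an illustration of the idea only, not the Aequitas API; the function names and metric choices (FPR/FNR, ratio against a reference group) are ours.

```python
from collections import defaultdict

def group_metrics(labels, scores, groups, threshold=0.5):
    """Per-group confusion-matrix counts and error rates (illustrative sketch)."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for y, s, g in zip(labels, scores, groups):
        pred = 1 if s >= threshold else 0
        key = ("tp" if y else "fp") if pred else ("fn" if y else "tn")
        counts[g][key] += 1
    metrics = {}
    for g, c in counts.items():
        # Guard against empty denominators for groups with no negatives/positives.
        fpr = c["fp"] / (c["fp"] + c["tn"]) if (c["fp"] + c["tn"]) else 0.0
        fnr = c["fn"] / (c["fn"] + c["tp"]) if (c["fn"] + c["tp"]) else 0.0
        metrics[g] = {"fpr": fpr, "fnr": fnr}
    return metrics

def disparity(metrics, metric, reference):
    """Ratio of each group's metric to a reference group's value."""
    ref = metrics[reference][metric]
    return {g: (m[metric] / ref if ref else float("inf"))
            for g, m in metrics.items()}
```

An audit then reduces to inspecting whether any group's disparity ratio falls outside an acceptable band (e.g., the 0.8–1.25 range often used as a rule of thumb).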

Pre-Requisites

  • Programming (in Python).
  • Machine Learning background (understanding of and experience building ML models).
  • Caring about the world, fairness, and equity.

Schedule and Structure

Google Slides

Interactive versions hosted on Colab

Static Jupyter notebooks

  1. Overall fairness and equity when building Data Science/ML systems

  2. From societal goals to fairness goals to ML fairness metrics

3. Auditing bias and fairness of an ML-based decision-making system

  4. Exploring bias reduction strategies

  5. Wrap-Up

    • Things to remember
    • Additional tools and resources
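One family of bias reduction strategies covered in the schedule above is post-processing: adjusting decision thresholds per group after a model is trained, in the spirit of Hardt et al.'s equalized-odds work cited below. The sketch here, assuming equalizing recall across groups is the chosen fairness goal, picks a per-group score threshold that hits a target recall; the function name, `target_recall` parameter, and fallback threshold are illustrative choices of ours, not the tutorial's code.

```python
def equalize_recall_thresholds(labels, scores, groups, target_recall=0.7):
    """Per-group thresholds that achieve (approximately) a target recall.

    Illustrative post-processing sketch: not a full equalized-odds solver.
    """
    by_group = {}
    for y, s, g in zip(labels, scores, groups):
        by_group.setdefault(g, []).append((s, y))
    thresholds = {}
    for g, pairs in by_group.items():
        # Scores of the true positives, highest first.
        positives = sorted((s for s, y in pairs if y == 1), reverse=True)
        if not positives:
            thresholds[g] = 0.5  # arbitrary fallback when a group has no positives
            continue
        # Classify the top-k positives as positive to reach the target recall.
        k = max(1, int(round(target_recall * len(positives))))
        thresholds[g] = positives[k - 1]
    return thresholds
```

Choosing per-group thresholds this way trades a small amount of overall accuracy for equal recall across groups; the tutorial's case studies discuss when that trade-off is appropriate.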

Resources

References

Bias Reduction Papers

  • André Cruz, Catarina Belém, João Bravo, Pedro Saleiro, and Pedro Bizarro. FairGBM: Gradient Boosting with Fairness Constraints. 11th International Conference on Learning Representations, ICLR 2023.

  • Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach. A reductions approach to fair classification. 35th International Conference on Machine Learning, ICML 2018, 1:102–119, 2018.

  • Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P. Gummadi. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. 26th International World Wide Web Conference, WWW 2017, pages 1171–1180, 2017.

  • Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P. Gummadi. Fairness constraints: Mechanisms for fair classification. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 54, 2017.

  • L. Elisa Celis, Lingxiao Huang, Vijay Keswani, and Nisheeth K. Vishnoi. Classification with fairness constraints: A meta-algorithm with provable guarantees. Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 319–328, 2019.

  • Andrew Cotter, Maya Gupta, Heinrich Jiang, Nathan Srebro, Karthik Sridharan, Serena Wang, Blake Woodworth, and Seungil You. Training well-generalizing classifiers for fairness metrics and other data-dependent constraints. In Proceedings of the 36th International Conference on Machine Learning, volume 97, pages 1397–1405, Long Beach, California, USA, June 2019. PMLR.

Post Modeling Correction Papers

  • Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q. Weinberger. On fairness and calibration. In I. Guyon, U. von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5680–5689. Curran Associates, Inc., 2017.

  • Moritz Hardt, Eric Price, and Nathan Srebro. Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems 29 (NIPS 2016), 2016.

  • Kit T Rodolfa, Erika Salomon, Lauren Haynes, Iván Higuera Mendieta, Jamie Larson, and Rayid Ghani. Case study: predictive fairness to reduce misdemeanor recidivism through social service interventions. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 142–153, 2020.

Case Studies

  • Alexandra Chouldechova, Diana Benavides-Prado, Oleksandr Fialko, and Rhema Vaithianathan. A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions. In Conference on Fairness, Accountability and Transparency, pages 134–148, 2018.

  • Kit T Rodolfa, Erika Salomon, Lauren Haynes, Iván Higuera Mendieta, Jamie Larson, and Rayid Ghani. Case study: predictive fairness to reduce misdemeanor recidivism through social service interventions. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 142–153, 2020.

Acknowledgements

  • Aaron Dunmore
  • Beatriz Malveiro
  • Catarina Belém
  • David Polido

Contributors

rayidghani, saleiro, sgpjesus, shaycrk

