
Adversarial Examples in Deep Learning

Deep Learning has brought tremendous achievements in the field of Computer Vision. In spite of this success, modern Deep Learning systems are still prone to adversaries. Let's talk in terms of Computer Vision. Consider an image of a polar bear (X1). A Deep Learning-based image classifier is able to successfully classify X1 as a polar bear. Now consider another instance of a polar bear, X2, which is a slightly perturbed version of X1. To human eyes it would still be a polar bear, but for that same image classifier it would be an ant. Such perturbed images are referred to as image adversaries (or adversarial examples).
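A common way to craft such a perturbation is the Fast Gradient Sign Method (FGSM), which nudges every pixel in the direction that increases the classifier's loss. The sketch below is a minimal illustration of that idea in TensorFlow (not necessarily the exact approach used in the notebooks); it assumes a Keras classifier `model` that outputs class probabilities, a preprocessed input batch `image`, and the true class index `label`.

```python
# Minimal FGSM sketch. `model`, `image`, and `label` are assumed to exist;
# the clipping range depends on how the inputs were preprocessed.
import tensorflow as tf

def fgsm_perturbation(model, image, label, epsilon=0.01):
    """Returns an adversarially perturbed copy of `image`."""
    label = tf.convert_to_tensor([label])
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = loss_fn(label, prediction)

    # Direction in pixel space that increases the loss the fastest.
    gradient = tape.gradient(loss, image)
    signed_grad = tf.sign(gradient)

    # Vanilla (untargeted) attack: step up the loss surface. A targeted attack
    # would instead compute the loss w.r.t. a chosen target label and subtract
    # the signed gradient, pushing the prediction towards that label.
    adversarial_image = image + epsilon * signed_grad
    return tf.clip_by_value(adversarial_image, -1.0, 1.0)  # assumes inputs in [-1, 1]
```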

This repository contains code for a short crash course related to adversarial examples in deep learning. The crash course includes an introduction to adversarial examples, training models that are adversarial-aware, situations where adversarial-aware models can fail, and so on.

The crash course is presented in the form of Weights and Biases reports. The first report in this series is now up -

Contents (to be updated):

  • Image_Adversaries_Basics.ipynb: Shows how to create adversaries that can fool a ResNet50 model pre-trained on ImageNet. Includes both vanilla and targeted attacks.
  • Adversarial_Training_NSL.ipynb: Shows how to train adversarially robust image classifiers using Neural Structured Learning (see the sketch after this list).
  • GANs_w_Adversaries.ipynb: Shows how to incorporate GANs (plain old DCGAN) to tackle adversarial situations.
  • Optimizer_Susceptibility.ipynb: Studies the susceptibility of different optimizers against simple attacks.
  • Optimizer_Susceptibility_Targeted_Attacks.ipynb: Studies the susceptibility of different optimizers against targeted attacks.
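Neural Structured Learning exposes adversarial training through `nsl.keras.AdversarialRegularization`, which wraps a regular Keras model so that each training step also minimizes the loss on adversarially perturbed copies of the batch. The sketch below shows the idea on MNIST; the base model and hyperparameters are illustrative and not necessarily those used in the notebook.

```python
# Minimal adversarial-regularization sketch with Neural Structured Learning.
# Dataset, model, and hyperparameters are illustrative only.
import neural_structured_learning as nsl
import tensorflow as tf

# Prepare data.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A plain Keras base model; the input name must match the feature key used below.
base_model = tf.keras.Sequential([
    tf.keras.Input((28, 28), name="feature"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Configure how the adversarial neighbours are generated during training.
adv_config = nsl.configs.make_adv_reg_config(
    multiplier=0.2,      # weight of the adversarial loss term
    adv_step_size=0.05,  # perturbation magnitude
)

# Wrap the base model so each batch is also trained on perturbed inputs.
adv_model = nsl.keras.AdversarialRegularization(base_model, adv_config=adv_config)

# Compile, train, and evaluate as usual, feeding features and labels as a dict.
adv_model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
adv_model.fit({"feature": x_train, "label": y_train}, batch_size=32, epochs=5)
adv_model.evaluate({"feature": x_test, "label": y_test})
```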

Note: The materials are strictly for learning purposes and should not be considered for production systems.

Coded in:

  • TensorFlow 2.x (at the time of writing, Google Colab had TensorFlow 2.3.0)

