
Code for the XEnsemble Robust Deep Learning project.


Introduction

XEnsemble is an advanced robust deep learning package that can defend against both adversarial examples and out-of-distribution inputs (out-of-distribution support to be updated). The intuition behind it is the input and model divergence of these attack inputs [1,5].

XEnsemble now supports four datasets: MNIST, CIFAR-10, ImageNet and LFW.

Installation

Note: This code has been tested on Python 3.7.6

  1. Create a virtual environment and activate it.
  2. Clone this repo and install the dependencies: pip install -r requirements.txt. If keras-contrib is listed in requirements.txt, the install may fail; remove it from the file and install it as described in the next step.
  3. Install keras-contrib: pip install git+https://www.github.com/keras-team/keras-contrib.git.

How to run

With GUI

Please refer to the instructions in frontend/README.md.

Without GUI

  1. main_attack_portal.py: please read the PDF file for more details of the available attacks.
python main_attack_portal.py --dataset_name MNIST --model_name CNN1 --attacks \
"fgsm?eps=0.3;bim?eps=0.3&eps_iter=0.06;deepfool?overshoot=10;pgdli?eps=0.3;\
fgsm?eps=0.3&targeted=most;fgsm?eps=0.3&targeted=next;fgsm?eps=0.3&targeted=ll;\
bim?eps=0.3&eps_iter=0.06&targeted=most;\
bim?eps=0.3&eps_iter=0.06&targeted=next;\
bim?eps=0.3&eps_iter=0.06&targeted=ll;\
carlinili?targeted=most&batch_size=1&max_iterations=1000&confidence=10;\
carlinili?targeted=next&batch_size=1&max_iterations=1000&confidence=10;\
carlinili?targeted=ll&batch_size=1&max_iterations=1000&confidence=10;\
carlinil2?targeted=most&batch_size=100&max_iterations=1000&confidence=10;\
carlinil2?targeted=next&batch_size=100&max_iterations=1000&confidence=10;\
carlinil2?targeted=ll&batch_size=100&max_iterations=1000&confidence=10;\
carlinil0?targeted=most&batch_size=1&max_iterations=1000&confidence=10;\
carlinil0?targeted=next&batch_size=1&max_iterations=1000&confidence=10;\
carlinil0?targeted=ll&batch_size=1&max_iterations=1000&confidence=10;\
jsma?targeted=most;\
jsma?targeted=next;\
jsma?targeted=ll;"
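The --attacks argument above is a semicolon-separated list of entries of the form name?param=value&param=value. As an illustration of this spec format, a small parser could look like the following (parse_attack_spec is a hypothetical helper, not part of the package):

```python
from urllib.parse import parse_qsl

def parse_attack_spec(spec):
    """Split a semicolon-separated attack spec such as
    "fgsm?eps=0.3;bim?eps=0.3&eps_iter=0.06" into (name, params) pairs."""
    attacks = []
    for entry in filter(None, (s.strip() for s in spec.split(";"))):
        name, _, query = entry.partition("?")
        # parse_qsl turns "eps=0.3&eps_iter=0.06" into key/value pairs
        params = dict(parse_qsl(query))
        attacks.append((name, params))
    return attacks

print(parse_attack_spec("fgsm?eps=0.3;bim?eps=0.3&eps_iter=0.06"))
```

Parameter values stay as strings here; the portal scripts decide how to coerce each one (e.g. eps to float, targeted to a mode name).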
  2. input_denoising_portal.py: please read the PDF file for more details of the available input denoising methods.
python input_denoising_portal.py --dataset_name MNIST --model_name CNN1 --attacks "fgsm?eps=0.3" --input_verifier "bit_depth_1;median_filter_2_2;rotation_-6"
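The input verifiers named above (bit_depth_1, median_filter_2_2, rotation_-6) correspond to simple input transformations. A minimal sketch of what such squeezers might compute, assuming grayscale images with pixel values in [0, 1] and using NumPy/SciPy (these helpers are illustrative, not the package's implementation):

```python
import numpy as np
from scipy.ndimage import median_filter, rotate

def bit_depth(x, bits):
    # Quantize pixel values in [0, 1] down to 2**bits levels
    # (bits=1 reduces the image to binary, as in bit_depth_1).
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def median_smooth(x, h=2, w=2):
    # Apply an h x w median filter over the image (median_filter_2_2).
    return median_filter(x, size=(h, w))

def small_rotation(x, degrees=-6):
    # Rotate by a few degrees while keeping the original shape (rotation_-6).
    return rotate(x, degrees, reshape=False, mode="nearest")

img = np.random.RandomState(0).rand(28, 28)
squeezed = [bit_depth(img, 1), median_smooth(img), small_rotation(img)]
```

Each transform removes small, high-frequency perturbations while preserving the content a clean classifier relies on, which is why divergence after squeezing signals a possible attack.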
  3. cross_layer_defense.py: please read the PDF file for more details of the available model choices. More ensemble-diversity details can be found in the paper.
python cross_layer_defense.py --dataset_name MNIST --model_name cnn1 --attacks "fgsm?eps=0.3" --input_verifier "bit_depth_1;median_filter_2_2;rotation_-6" --output_verifier "cnn2;cnn1_half;cnn1_double;cnn1_30;cnn1_40"
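The output verifiers (cnn2, cnn1_half, cnn1_double, ...) form a model ensemble whose agreement backs the defense decision. As a rough illustration of the cross-model agreement idea (a simplified majority vote with hypothetical names, not the package's actual decision logic):

```python
import numpy as np

def ensemble_verdict(predictions):
    """Majority vote over per-model predicted labels.

    predictions: one class-probability vector per output-verifier model.
    Returns the majority label and the fraction of models that agree;
    low agreement suggests a deceptive (adversarial) input.
    """
    labels = [int(np.argmax(p)) for p in predictions]
    votes = np.bincount(labels)
    majority = int(np.argmax(votes))
    agreement = votes[majority] / len(labels)
    return majority, agreement

preds = [np.array([0.1, 0.9]), np.array([0.2, 0.8]), np.array([0.7, 0.3])]
label, agree = ensemble_verdict(preds)
```

Because adversarial perturbations rarely transfer to every diverse model at once, the vote tends to disagree on attack inputs while staying unanimous on benign ones.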
  4. detection_only_comparison.py: please read the Feature Squeezing, MagNet, and LID papers for implementation details.
python detection_only_comparison.py --dataset_name MNIST --model_name CNN1 --attacks "fgsm?eps=0.3;bim?eps=0.3&eps_iter=0.06;carlinili?targeted=next&batch_size=1&max_iterations=1000&confidence=10;carlinili?targeted=ll&batch_size=1&max_iterations=1000&confidence=10;carlinil2?targeted=next&batch_size=100&max_iterations=1000&confidence=10;carlinil2?targeted=ll&batch_size=100&max_iterations=1000&confidence=10;carlinil0?targeted=next&batch_size=1&max_iterations=1000&confidence=10;carlinil0?targeted=ll&batch_size=1&max_iterations=1000&confidence=10;jsma?targeted=next;jsma?targeted=ll;" --detection "FeatureSqueezing?squeezers=bit_depth_1&distance_measure=l1&fpr=0.05;FeatureSqueezing?squeezers=bit_depth_2&distance_measure=l1&fpr=0.05;FeatureSqueezing?squeezers=bit_depth_1,median_filter_2_2&distance_measure=l1&fpr=0.05;MagNet"
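Feature Squeezing detection flags an input as adversarial when the model's output on the original input diverges from its output on the squeezed input by more than a threshold calibrated to a target false-positive rate (the distance_measure and fpr parameters above). A minimal sketch with hypothetical helper names and stand-in scores:

```python
import numpy as np

def squeezing_score(p_orig, p_squeezed):
    # L1 distance between the model's probability outputs on the
    # original and squeezed versions of the same input.
    return float(np.abs(np.asarray(p_orig) - np.asarray(p_squeezed)).sum())

def calibrate_threshold(benign_scores, fpr=0.05):
    # Choose the threshold so that only `fpr` of benign inputs exceed it.
    return float(np.quantile(benign_scores, 1.0 - fpr))

# Stand-in benign scores; in practice these come from legitimate test data.
benign = np.random.RandomState(0).rand(1000) * 0.2
tau = calibrate_threshold(benign, fpr=0.05)

score = squeezing_score([0.9, 0.1], [0.2, 0.8])  # large divergence
is_adversarial = score > tau
```

With multiple squeezers (e.g. bit_depth_1,median_filter_2_2), the paper takes the maximum score across squeezers before comparing against the threshold.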

XEnsemble project

Development is continuing, and there is ongoing work in our lab on adversarial attacks and defenses. If you would like to contribute to this project, please contact Wenqi Wei.

If you use our code, you are encouraged to cite:

[1]@article{wei2018adversarial,
  title={Adversarial examples in deep learning: Characterization and divergence},
  author={Wei, Wenqi and Liu, Ling and Loper, Margaret and Truex, Stacey and Yu, Lei and Gursoy, Mehmet Emre and Wu, Yanzhao},
  journal={arXiv preprint arXiv:1807.00051},
  year={2018}
}


[2]@inproceedings{liu2019deep,
  title={Deep Neural Network Ensembles against Deception: Ensemble Diversity, Accuracy and Robustness},
  author={Liu, Ling and Wei, Wenqi and Chow, Ka-Ho and Loper, Margaret and Gursoy, Mehmet Emre and Truex, Stacey and Wu, Yanzhao},
  booktitle={The 16th IEEE International Conference on Mobile Ad-Hoc and Smart Systems},
  year={2019},
  publisher={IEEE}
}


[3]@inproceedings{chow2019denoising,
  title={Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks},
  author={Chow, Ka-Ho and Wei, Wenqi and Wu, Yanzhao and Liu, Ling},
  booktitle={Proceedings of the 2019 IEEE International Conference on Big Data},
  year={2019},
  organization={IEEE}
}


[4]@inproceedings{wei2020cross,
  title={Cross-Layer Strategic Ensemble Defense Against Adversarial Examples},
  author={Wei, Wenqi and Liu, Ling and Loper, Margaret and Chow, Ka-Ho and Gursoy, Mehmet Emre and Truex, Stacey and Wu, Yanzhao},
  booktitle={International Conference on Computing, Networking and Communications (ICNC)},
  year={2020}
}

Another two papers are under review:

[5] Wenqi Wei, Ling Liu, Margaret Loper, Mehmet Emre Gursoy, Stacey Truex, Lei Yu, and Yanzhao Wu, "Demystifying Adversarial Examples and Their Adverse Effect on Deep Learning", submitted to IEEE Transactions on Dependable and Secure Computing.
[6] Wenqi Wei and Ling Liu, "Robust Deep Learning Ensemble against Deception", submitted to IEEE Transactions on Dependable and Secure Computing.

Special Acknowledgment

This code package is built on top of EvadeML; we especially thank its authors. We also thank the authors of CleverHans, the Carlini & Wagner attacks, the PGD attack, MagNet, the universal perturbation and DeepFool attacks, the Keras models, and those who implemented the neural network models with trained weights.

