
birds's People

Contributors

ck-amrahd, kcdharmaua


birds's Issues

Questions about optimal lambda values, the data split, and epsilon values

Hi Dharma,

Your work and code look amazing to me, so I was trying to reproduce your experiment. I can run the model training end to end, but I have the following questions about the detailed parameter optimization and validation:

  1. In split.py, only a quarter of the data is saved for training, validation, and testing:

     for img_name in train_images[::4]:
     for img_name in train_val_images[::4]:
     for img_name in val_images[::4]:
     for img_name in test_images[::4]:

This gave me around 1590 images for training and around 530 each for validation and testing. However, when I trained and validated the model on this quarter of the data, I could not reproduce the accuracy listed in the paper, so I changed the code to the following:

     for img_name in train_images:
     for img_name in train_val_images:
     for img_name in val_images:
     for img_name in test_images:

This gave me around 5914 training images and around 1971 each for validation and testing, and with the full data I got validation accuracy similar to what you report in the paper. Did I do the right thing? Please let me know if it was wrong, thank you very much! (A tiny illustration of the [::4] slicing follows below.)
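Just to confirm my understanding of the slicing: [::4] keeps every fourth element, which explains the roughly 4x gap in dataset sizes. A minimal illustration with made-up filenames (assuming 5914 files, as above):

# Made-up filenames, only to illustrate the [::4] slice used in split.py.
train_images = ["img_%04d.jpg" % i for i in range(5914)]

quarter = train_images[::4]  # keeps indices 0, 4, 8, ... (every 4th image)
print(len(train_images))     # 5914
print(len(quarter))          # 1479, roughly a quarter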

  2. What were your optimal choices of lambda_1 and lambda_2 at the different epsilon values? I tried lambda_1 = lambda_2 = 1 with epsilon = 0 using the 5914 training images, which gives accuracy similar to the paper's (around 75%). It would be awesome if you could share more details about the optimal lambda choices for the other epsilon values. (A generic sweep over the two weights is sketched right below.)
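For what it's worth, I could search the two weights with a small grid per epsilon. train_and_validate below is a hypothetical stand-in for the repo's actual training entry point, and the candidate grids are just my guesses, not values from the paper:

import itertools

def train_and_validate(lambda_1, lambda_2, epsilon):
    # Hypothetical stand-in: hook this up to the repo's actual training
    # and validation code so it returns validation accuracy.
    return 0.0  # placeholder

lambda_grid = [0.1, 1.0, 10.0]       # assumed candidate weights
epsilon_grid = [0.0, 0.0025, 0.175]  # radii discussed in this thread

for eps in epsilon_grid:
    scores = {(l1, l2): train_and_validate(l1, l2, eps)
              for l1, l2 in itertools.product(lambda_grid, lambda_grid)}
    best = max(scores, key=scores.get)
    print("epsilon=%s best (lambda_1, lambda_2)=%s acc=%.3f"
          % (eps, best, scores[best]))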

  3. I also validated the pretrained "normal" model (train_method = 'normal') at different epsilons (adversarial perturbation radii), but I could not reproduce the accuracies in your paper. For example, validating the pretrained "normal" model at epsilon = 0.175, I got only around 30% validation accuracy, whereas the paper reports around 52% for the "normal" model at that radius. The same happens with the "bbox" model: I got 35% validation accuracy at epsilon = 0.175 with lambda_1 = lambda_2 = 1, while the paper reports around 65% for the "lambda equal" model. However, when I validate at epsilon = 0.0025, I get results similar to those the paper reports for epsilon = 0.175. Below is the code I used for the robust-accuracy validation; could you please kindly let me know if there is anything wrong?

import torch
import torch.nn as nn
import foolbox as fb
import numpy as np
from torchvision import datasets, models, transforms

model_path = "/results/resnet50/normal_1_1.pth"
val_dataset_path = '/data/val'
epsilon = 0.175

num_classes = 200
device = torch.device('cuda')

# Note: this transform normalizes the images, so the attack below runs on
# normalized tensors even though the foolbox model is wrapped with
# bounds=(0, 1).
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

val_dataset = datasets.ImageFolder(val_dataset_path, transform)
val_loader = torch.utils.data.DataLoader(val_dataset,
                                         batch_size=64,
                                         shuffle=False,
                                         num_workers=0)

bounds = (0, 1)
print('Running attacks...')

# Rebuild the ResNet-50 classifier head and load the trained weights.
model = models.resnet50(pretrained=False)
input_features = model.fc.in_features
model.fc = nn.Linear(input_features, num_classes)
model.load_state_dict(torch.load(model_path, map_location=device))
model = model.to(device)  # the inputs are moved to the same device below
model.eval()

fmodel = fb.PyTorchModel(model, bounds=bounds)
attack = fb.attacks.FGSM()

robust_acc_list = []
for inputs, labels in val_loader:
    inputs, labels = inputs.to(device), labels.to(device)
    # foolbox returns (raw, clipped, success flags); only the flags are needed.
    _, _, is_adv = attack(fmodel, inputs, labels, epsilons=epsilon)
    robust_acc = 1 - is_adv.float().mean(axis=-1)
    robust_acc_list.append(robust_acc.cpu().numpy())

avg_acc = np.mean(robust_acc_list)
print(avg_acc)
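For comparison, I understand foolbox can also apply the normalization itself via its preprocessing argument, in which case the images stay in [0, 1] and epsilon is measured in raw pixel units. I am not sure this matches the epsilon convention in your paper, so this is only a sketch, reusing the same paths as above:

import torch
import torch.nn as nn
import foolbox as fb
import numpy as np
from torchvision import datasets, models, transforms

device = torch.device('cuda')
num_classes = 200

# No Normalize here: the loader yields raw images in [0, 1].
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
val_dataset = datasets.ImageFolder('/data/val', transform)
val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=64,
                                         shuffle=False)

model = models.resnet50(pretrained=False)
model.fc = nn.Linear(model.fc.in_features, num_classes)
model.load_state_dict(torch.load("/results/resnet50/normal_1_1.pth",
                                 map_location=device))
model = model.to(device).eval()

# foolbox normalizes internally, so epsilon below is in raw [0, 1] units.
preprocessing = dict(mean=[0.485, 0.456, 0.406],
                     std=[0.229, 0.224, 0.225],
                     axis=-3)
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)
attack = fb.attacks.FGSM()

robust_acc_list = []
for inputs, labels in val_loader:
    inputs, labels = inputs.to(device), labels.to(device)
    _, _, is_adv = attack(fmodel, inputs, labels, epsilons=0.175)
    robust_acc_list.append((1 - is_adv.float().mean()).item())

print(np.mean(robust_acc_list))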

Thank you in advance for your kind help and time!!!
