optml-group / unlearn-worstcase
"Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning" by Chongyu Fan*, Jiancheng Liu*, Alfred Hero, Sijia Liu
License: MIT License
"Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning" by Chongyu Fan*, Jiancheng Liu*, Alfred Hero, Sijia Liu
License: MIT License
Hi there! I am trying to use the worst-case unlearn set generator with the retrain mode, but I am getting the following error trace:
Traceback (most recent call last):
File "/home/ppol/Unlearn-WorstCase/data-wise/main_selmu.py", line 216, in <module>
main()
File "/home/ppol/Unlearn-WorstCase/data-wise/main_selmu.py", line 121, in main
unlearn_method(train_full_loader, model, criterion, args, w, mask)
File "/home/ppol/Unlearn-WorstCase/data-wise/unlearn/impl.py", line 313, in _wrapped
train_acc = unlearn_iter_func(
TypeError: w_retrain() takes 7 positional arguments but 8 were given
Command to reproduce the error:
python main_selmu.py --arch resnet18 --dataset cifar10 --cp_path results/cifar10/0model_SA_best.pth.tar --num_indexes_to_replace 4500 --unlearn w_retrain --unlearn_steps 182 --save_dir results/cifar10
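For what it's worth, this kind of mismatch usually means the wrapper in impl.py forwards one more positional argument than the unlearn function accepts. A minimal sketch of the failure and a tolerant workaround (the signatures below are illustrative placeholders, not the repo's actual code):

```python
def w_retrain(data_loaders, model, criterion, args, w, mask, step):
    # Placeholder body; the real implementation differs.
    return 0.0

# Forwarding 8 positional arguments into a 7-argument function reproduces it:
try:
    w_retrain("loader", "model", "criterion", "args", "w", "mask", "step", "extra")
except TypeError as e:
    print(e)  # prints: w_retrain() takes 7 positional arguments but 8 were given

# Workaround sketch: accept and ignore extras until the signature is aligned.
def w_retrain_fixed(data_loaders, model, criterion, args, w, mask, step, *extra):
    return 0.0

w_retrain_fixed("loader", "model", "criterion", "args", "w", "mask", "step", "extra")
```

Whether the fix belongs in the wrapper or in w_retrain itself depends on what the extra argument is supposed to be, so the `*extra` trick is only a stopgap.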
Am I specifying the launch command incorrectly? I tried to follow the data-wise link.
In the future I would like to use other unlearning methods besides w_retrain. An example command for each one, as in the paper, or a config file would be very helpful.
For reference, the ResNet checkpoint was obtained with the following command:
python main_train.py --dataset cifar10 --save_dir "results/cifar10"
Hi there! I came across a missing dependency in Unlearn-WorstCase/data-wise/unlearn/impl.py: a module named pruner is imported (lines 6 and 9) but is not included in the repo. It does not seem to match the pruner pip package. Is it perhaps related to torch-pruning?
For now I have commented it out. :)
Thanks in advance!
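In case it helps others hitting the same ImportError, a guarded import is a less invasive workaround than commenting the lines out (this assumes pruner really is optional for the code paths you run; the helper below is hypothetical):

```python
import importlib

# Guarded import: prefer the local module, fall back to None so later
# code can check availability instead of crashing at import time.
try:
    pruner = importlib.import_module("pruner")
except ImportError:
    pruner = None

def prune_if_available(model):
    # Hypothetical guard: skip pruning-specific code paths when absent.
    if pruner is None:
        return model  # pruning disabled; hand the model back untouched
    # Replace with whatever API impl.py actually calls on the module.
    return model
```

This way the rest of impl.py keeps working until the maintainers clarify which package pruner refers to.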
Hello!
First of all, I'd like to express my appreciation for your outstanding work and for sharing your code openly. I am currently trying to replicate the data-wise experiments, and I've noticed that several hyperparameters are involved in the unlearning process, such as unlearn_steps and theta_lr. I believe the selection of these values plays a crucial role in the final experimental outcomes.
Could you kindly provide the specific values of these hyperparameters or elaborate on the methodology used to select them? Your assistance in this matter would be greatly appreciated.
Thank you very much.
Not really an issue with the code, but a request. I am trying to reproduce the unlearning evaluations. In Appendix B1 you state:
For FT [66], RL [26], EU-k [25], CF-k [25], and SCRUB [39], the unlearning process takes 10 epochs, during which the optimal learning rate is searched within the range of [10^-4, 10^-1].
It is not entirely clear how this should be specified in the main_evalmu.py args, since the provided example is for retraining. Also, for the random sets, did you use the same learning rate?
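In case a concrete harness helps while waiting for the official recipes, here is a minimal sketch of how a log-spaced learning-rate search over that range could be driven (the run_unlearn callable and its score are placeholders, not the repo's API):

```python
import math

def log_spaced(low, high, num):
    # Evenly spaced values on a log10 scale between low and high, inclusive.
    lo, hi = math.log10(low), math.log10(high)
    return [10 ** (lo + i * (hi - lo) / (num - 1)) for i in range(num)]

def search_learning_rate(run_unlearn, low=1e-4, high=1e-1, num=4):
    """Try each candidate LR and keep the one with the best score."""
    best_lr, best_score = None, float("-inf")
    for lr in log_spaced(low, high, num):
        score = run_unlearn(lr)  # e.g. retain accuracy after 10 epochs
        if score > best_score:
            best_lr, best_score = lr, score
    return best_lr, best_score

# Toy objective whose optimum sits at 1e-2, just to exercise the loop.
best_lr, _ = search_learning_rate(lambda lr: -abs(math.log10(lr) + 2))
```

With num=4 this tries 1e-4, 1e-3, 1e-2, and 1e-1; a finer grid or per-method ranges would presumably be needed to match the paper.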
Could you please provide "the recipes" on how to reproduce Table 3 with the current version of the code? A bash or txt file with the launching commands for different methods would be veeery helpful.
I tried with the default rate and scrub but seems that the model is diverging on the unlearning process and does not reproduce.