Comments (26)
I've had a pytorch implementation lingering around for some time on my hard drive. I've just polished it up a bit (hope it's readable at all...) and wrote a few docs to go along with it. You can find it here: https://github.com/nimarb/pytorch_influence_functions
It doesn't implement all the graphics, tests, and examples of the original paper - just the algorithm itself.
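For the curious: the core of that algorithm is estimating the inverse-Hessian-vector product H^-1 v without ever forming H^-1. Here is a minimal pure-Python sketch of a LiSSA-style recursion on a toy explicit 2x2 Hessian (all numbers are made up; a real implementation computes the HVP via autograd on mini-batches and also carries a damping term and averaging over repeats):

```python
# Sketch of a LiSSA-style recursion: iterate h <- v + h - (H h) / scale.
# The fixed point satisfies H h / scale = v, so h / scale -> H^{-1} v.
# A toy explicit matrix stands in for an autograd HVP; values are illustrative.

def mat_vec(H, x):
    """Multiply a small explicit matrix H by a vector x."""
    return [sum(h_ij * x_j for h_ij, x_j in zip(row, x)) for row in H]

def lissa_inverse_hvp(H, v, scale=10.0, iters=1000):
    """Estimate H^{-1} v; converges when the eigenvalues of H/scale lie in (0, 2)."""
    h = list(v)
    for _ in range(iters):
        hv = mat_vec(H, h)
        h = [v_i + h_i - hv_i / scale for v_i, h_i, hv_i in zip(v, h, hv)]
    return [h_i / scale for h_i in h]

H = [[3.0, 1.0], [1.0, 2.0]]  # toy positive-definite "Hessian"
v = [1.0, 0.0]
ihvp = lissa_inverse_hvp(H, v)  # exact H^{-1} v is [0.4, -0.2] for this H
```

This only shows the shape of the update, not the stochastic mini-batch estimation the paper actually uses.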
from influence-release.
I've now created a private repository with my current status and invited @tengerye. If anyone else is interested in having a look, just let me know.
Hello,
@expectopatronum I don't think I saw any email (sorry if I missed it). But thanks @tengerye for answering it.
This repo is frozen to what was used for the paper. I'm glad that there's interest in making a Pytorch version; thank you and good luck! In case it helps, we have a more recent paper that also uses influence functions, and the code there is cleaner and easier to read: https://github.com/kohpangwei/group-influence-release
Hi @expectopatronum, I am also interested in a PyTorch implementation of the paper. Could you share your code with me? Thanks.
@expectopatronum I'm also very interested in the Pytorch implementation, could you also share your code with me as well? It'd be a fantastic help!
@expectopatronum I'm also looking for the PyTorch implementation of influence functions! It'll be very helpful if you share your code!
Wonderful work! I would also like to see it done in PyTorch.
I do wish the PyTorch version could be released as soon as possible.
Hi, what is the progress now? I would like to join.
I was able to reproduce the hospital readmission notebook experiments in Pytorch with a few issues:
- The bar charts are similar (so it returns the influential samples in the same/correct order) but the computed influence values are all too large (all by the same factor). I am not yet sure whether the error is in my loss functions, some missing scaling, ...
- The second thing is that it is super slow (about 35 times slower than the TF implementation); so far I haven't found a solution for that (from profiling, it looks like the DataLoaders might be the bottleneck).
Since I couldn't get it to run in reasonable time, and some things from the original implementation are unclear to me (I sent an email to the first author of the paper but haven't received an answer yet), I have moved on to other interpretability methods.
My code is messy so I didn't put it online. If someone is interested in helping me - feel free to contact me, I'd like to give it another shot.
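One pattern worth checking for a constant-factor discrepancy like that: if one implementation builds the Hessian from the sum of per-example losses and the other from the mean, every influence value is scaled by exactly the training-set size n, while the ranking stays identical. A toy pure-Python check (all numbers hypothetical, not from either codebase):

```python
# Sketch: influence ~ -g_test^T H^{-1} g_train. If H is built from the summed
# loss instead of the mean loss, H is n times larger, so every influence value
# shrinks by the same factor n. Toy 2-d vectors with made-up values.

def inv_2x2(H):
    """Explicit inverse of a 2x2 matrix."""
    (a, b), (c, d) = H
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def influence(g_test, H, g_train):
    """-g_test^T H^{-1} g_train."""
    Hinv = inv_2x2(H)
    Hg = [Hinv[0][0] * g_train[0] + Hinv[0][1] * g_train[1],
          Hinv[1][0] * g_train[0] + Hinv[1][1] * g_train[1]]
    return -(g_test[0] * Hg[0] + g_test[1] * Hg[1])

n = 100                                            # hypothetical training-set size
H_mean = [[2.0, 0.5], [0.5, 1.0]]                  # Hessian of the mean loss
H_sum = [[n * x for x in row] for row in H_mean]   # Hessian of the summed loss
g_test, g_train = [1.0, -1.0], [0.5, 2.0]

i_mean = influence(g_test, H_mean, g_train)
i_sum = influence(g_test, H_sum, g_train)
ratio = i_mean / i_sum  # exactly n: same ordering, constant scale difference
```

So a uniform scale difference across all influence values points at a normalization mismatch somewhere, not at the ranking logic.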
@expectopatronum Hi, I am working on the first experiment by translating the TensorFlow code to PyTorch, it is difficult though. I would like to help and work on it together.
What is your approach? Do you translate the code file by file or organize it yourself?
First I tried to translate the code file by file, but I think PyTorch and TensorFlow work too differently. I also want the influence code decoupled from the model, so I put it in a separate file. In the end I want it to work for every model, without copying the code into each model.
I also tried to figure out which parts are actually used (in the example) and only implement those (for now). E.g. in the hospital_readmission example (which I use to test my implementation) they pass test_indices, so I currently don't care about the part of the function that deals with the case where this is None.
I will put my code on Github in the next couple of days and share it with you - maybe we can solve it together.
Here is one of the questions I asked the author, maybe you have an answer to this:
The function update_feed_dict_with_v_placeholder is not clear to me. First you fill the feed_dict with a batch of the data (https://github.com/kohpangwei/influence-release/blob/master/influence/genericNeuralNet.py#L496) and afterwards you seem to update this batch with cur_estimate. What does the feed_dict look like at this stage?
a) Is the input replaced by v? Is the prediction computed on v or on the input?
b) Or is v added to the feed_dict, so that it now contains input, label, and v?
@expectopatronum
The function update_feed_dict_with_v_placeholder just inserts the v_placeholder tensors and their corresponding values into the feed_dict (i.e. your option (b): v is added alongside the input and label). The key in the feed_dict is the tensor and the value is the corresponding value.
Hope it can help. By the way, may I ask when you think your code will be ready online?
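To make that concrete, here is a sketch of what the feed_dict holds at that stage. Plain string keys stand in for the TF placeholder tensors (in real TF1 code the keys are the placeholder objects themselves), and every name and value here is hypothetical:

```python
# Sketch of the feed_dict described above; strings stand in for TF placeholder
# tensors, and all names/values are made up for illustration.

batch_feed_dict = {
    "input_placeholder": [[0.1, 0.2], [0.3, 0.4]],  # batch of inputs
    "labels_placeholder": [0, 1],                   # batch of labels
}

# The update adds one entry per v_placeholder block; it does not replace the
# inputs. Afterwards the dict holds inputs, labels, AND the current estimate v.
cur_estimate = [[0.5, -0.5]]  # hypothetical v, one block per parameter group
feed_dict = dict(batch_feed_dict)
for i, block in enumerate(cur_estimate):
    feed_dict["v_placeholder_%d" % i] = block
```

So the loss/gradient is still evaluated on the data batch; v only feeds the Hessian-vector-product node.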
Alright, thanks!
I am currently working on it, so I'd expect it to be ready in a couple of hours.
Hi @expectopatronum, just stumbled upon this...I'm also working on a currently unreleased PyTorch implementation of the paper, feel free to reach out...
@kohpangwei doesn't seem to really care about this repository anymore, what a shame.
Hi @kohpangwei, that's strange. I used the email address from your influence paper, is that still valid? I still have some theoretical questions about the paper that probably cannot be answered by someone on GitHub.
I am aware of the new paper, I didn't have time yet to check it out but I will soon :)
Thanks a lot!
Yup, that email address still works! Feel free to drop me a note there. :)
Thanks, I did! Hopefully it won't get lost this time :)
Hi @expectopatronum @Kunlun-Zhu @markus-beuckelmann, has anyone successfully reproduced the CNN experiment (Fig 2c) yet? Although the paper states that the method works well in the non-convergent case, I find I can never make get_inverse_hvp_cg converge. The original code achieves 0.9996 accuracy on the CNN training set and 0.9746 on the test set; in my case, it is 0.9325 and 0.8972 respectively.
I guess it must be related to the damping term.
@kohpangwei If possible, would you please share some experience on how to determine whether the training is good enough for the next step? E.g., did you check the eigenvalues of the inverse Hessian?
Hi @tengerye, unfortunately not. I have given up for now since I didn't even manage to exactly reproduce the hospital notebook (and it is super slow in my Pytorch implementation). Would you like to share your code?
Yup, checking the eigenvalues of the Hessian was a helpful diagnostic, and damping it "appropriately" (to make sure it's PSD) is important in the non-convex case. Increasing L2 regularization can also be helpful.
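A toy illustration of that diagnostic: compute the eigenvalues of a small explicit symmetric 2x2 "Hessian" in closed form and add just enough damping lambda*I to make it positive definite. For a real network you would estimate the spectrum stochastically (e.g. power iteration on HVPs) rather than form the matrix; everything below is a made-up example:

```python
# Sketch: eigenvalue check + damping for a toy explicit 2x2 symmetric matrix.
# All matrices and the margin are hypothetical illustration values.
import math

def sym_2x2_eigvals(H):
    """Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, c]], smallest first."""
    (a, b), (_, c) = H
    mean = (a + c) / 2.0
    rad = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    return mean - rad, mean + rad

def damp_to_psd(H, margin=1e-3):
    """Pick damping so the min eigenvalue of H + damping*I is at least margin."""
    lo, _ = sym_2x2_eigvals(H)
    damping = max(0.0, margin - lo)
    H_damped = [[H[0][0] + damping, H[0][1]],
                [H[1][0], H[1][1] + damping]]
    return H_damped, damping

# A non-convex point: one direction of negative curvature.
H = [[1.0, 0.0], [0.0, -0.5]]
H_damped, damping = damp_to_psd(H)  # damping = 0.501 shifts the spectrum up
```

Shifting the whole spectrum by lambda is exactly what the damping term in the inverse-HVP routines does; the eigenvalue check just tells you how large lambda needs to be.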
@kohpangwei Thank you for your kind reply. @expectopatronum Sharing is the whole reason I'm producing it. Allow me a few days to fix the problem before making it public.
@nimarb This is amazing, thanks for sharing! If you don't implement the tests and examples from the paper, how do you know it is correct? (Not saying that everything in the paper must be correct.)
Initially, I recreated the Inception and adversarial use cases (which were the most interesting for my use), where I got the same images for the most helpful data points. I hope to find the time to put those out over the Christmas holidays :)
Closing this thread; thanks @nimarb for the implementation. :)