openbct's People

Contributors

yantaoshen

openbct's Issues

Question about the --old-fc flag

When I try to train a new model with the influence loss (old classifier regularization), I can't find "your_old_fc_weights_dir". Could you please tell me where I can find the "fc_weights_dir"?
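The repo does not document where this file comes from, so here is a minimal sketch of one way to produce it, assuming the old model was saved as a standard PyTorch checkpoint whose state_dict holds the classifier under a key like "fc.weight" (the file names and keys below are assumptions, not the repo's):

    import torch

    # Assumed checkpoint layout: a plain state_dict, possibly nested under
    # "state_dict" and prefixed with "module." by DataParallel training.
    ckpt = torch.load("old_model_checkpoint.pth", map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt)
    state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}

    # Save just the old classifier weights to point --old-fc at.
    torch.save(state_dict["fc.weight"], "old_fc_weights.pth")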

Issue when using --use-feat and --old-fc flags

Thanks for sharing this code for reproduction. I'm having an issue when using the flags --use-feat and --old-fc for L2 feature regularization. The issue is that if --use-feat is on, the ResNet returns a single feature vector. However, when --old-fc is not None, the code expects model(images) to return three outputs, namely output, old_output, output_feat = model(images), which is a contradiction. What's the intended behavior here?
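For reference, a minimal sketch of the two return paths being described, with assumed names (use_feat, old_fc) that may not match the repo's code; one way to reconcile them is to always return the feature alongside the logits when the old classifier is attached:

    import torch.nn as nn

    class Head(nn.Module):
        # Illustrative only: mimics the conflicting --use-feat / --old-fc paths.
        def __init__(self, feat_dim, num_classes, old_fc=None, use_feat=False):
            super().__init__()
            self.fc = nn.Linear(feat_dim, num_classes)
            self.old_fc = old_fc      # frozen old classifier, or None
            self.use_feat = use_feat

        def forward(self, feat):
            if self.old_fc is not None:
                # Training loop expects: output, old_output, output_feat
                return self.fc(feat), self.old_fc(feat), feat
            if self.use_feat:
                return feat           # single feature vector
            return self.fc(feat)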

Is it a typo that the metric for IJB-C 1:N is TNIR(%)@FPIR=10^-2 in the paper?

Hi, it's me again, LOL. Sorry for so many questions.

In the paper you say that the metric for IJB-C 1:N is TNIR(%)@FPIR. But to the best of my knowledge, TNIR is equal to 1 - FPIR.
Maybe you mean TPIR(%)@FPIR=10^-2, which is what the DET (Detection Error Trade-off) curve reports, as in this paper?

By the way, for the open-set test protocol, should I calculate the metric for probe-G1 and probe-G2 separately and then average them?
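For concreteness, a minimal sketch of TPIR at a fixed FPIR under an open-set 1:N protocol, assuming precomputed probe-vs-gallery similarity matrices (an illustration, not the paper's evaluation code):

    import numpy as np

    def tpir_at_fpir(mated_sims, mated_labels, gallery_labels,
                     nonmated_sims, target_fpir=1e-2):
        # Threshold chosen so that the given fraction of non-mated probes
        # is falsely accepted (FPIR).
        thresh = np.quantile(nonmated_sims.max(axis=1), 1.0 - target_fpir)

        # A mated probe is a hit if its top-1 gallery match has the correct
        # identity and its score clears the threshold (TPIR).
        top1 = mated_sims.argmax(axis=1)
        top1_score = mated_sims[np.arange(len(mated_sims)), top1]
        correct = gallery_labels[top1] == mated_labels
        return float(np.mean(correct & (top1_score >= thresh)))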

The result of the L2 loss in the paper is zero, which differs from my re-implementation result

Thanks for your great work! But I am confused by the result of the L2 loss in the paper.

In the paper you report that the L2 loss is not effective and that the cross-model result is 0. But in my re-implementation (IMDb-Face, IJB-C), I found that the result of the L2 loss is close to that of the influence loss: it is zero at the beginning but increases after several epochs. May I ask how many epochs you trained the L2 loss for? (We followed the experimental setting mentioned in the paper, except that we only have 30% of the IMDb-Face data and we set the old training set to 15% of IMDb-Face.)
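For reference, a minimal sketch of the L2 feature regularization being compared here: the new model's embedding is pulled toward the frozen old model's embedding for the same images (the names new_model and old_model are illustrative, not the repo's exact code):

    import torch
    import torch.nn.functional as F

    def l2_feature_loss(new_model, old_model, images, weight=1.0):
        new_feat = new_model(images)
        with torch.no_grad():          # old model stays frozen
            old_feat = old_model(images)
        return weight * F.mse_loss(new_feat, old_feat)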
