pa-da's People

Contributors

shunlu91


pa-da's Issues

Questions about probabilistic shape

Dear researchers,
I apologize for the disturbance. I still have a couple of small questions about PA-DA that I would like to ask you. Thank you very much!

  1. Why is an array [0.2, 0.2, 0.2, 0.2, 0.2, 0.2] used to control the probability of sampling each edge in NAS-Bench-201, rather than an array of shape (5, 6)? Is the probability of each edge the same in the DARTS space? Could you explain the underlying principle?
  2. If the search space has shape (7, 15), how should I set up the PA sampling rules?
  3. PA-DA reaches the SOTA Kendall's tau, but I can only reach roughly 0.69 rather than the reported 0.713, and I do not know which parameters I have set incorrectly.
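For context on question 1, uniform sampling on NAS-Bench-201 (6 edges, 5 candidate operations per edge) can be sketched as below. The probability array, `sample_arch` function, and edge/operation counts are illustrative assumptions for this sketch, not the PA-DA implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# NAS-Bench-201 cells have 6 edges, each choosing one of 5 candidate ops.
# A uniform distribution assigns probability 0.2 to each operation, so a
# single shared array [0.2]*5 is equivalent to a (6, 5) matrix whose rows
# are all identical — there is no need for a per-edge probability table.
op_probs = np.full(5, 0.2)

def sample_arch(probs, num_edges=6):
    # Draw one operation index per edge from the same distribution.
    return [int(rng.choice(len(probs), p=probs)) for _ in range(num_edges)]

arch = sample_arch(op_probs)
print(arch)  # a list of 6 operation indices, each in [0, 5)
```

A non-uniform PA scheme would replace the shared `op_probs` with learned or performance-aware values, possibly different per edge.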

I sincerely look forward to your guidance. Best wishes.

Respected researchers

I am very interested in your research. Could you please share the code for the DARTS search space, so that the results in the paper can be better reproduced?

How were the 64 architectures selected, randomly or carefully?

Dear researchers, I have a few small questions for you.

  1. How were the 64 architectures chosen: randomly, or carefully? Were they selected according to a particular ranking?

  2. Regarding the measurement of the Kendall's tau (Ktau) coefficient: why does Fig. 1(a) show 0.4 while the paper reports 0.713? Are there any evaluation rules you can share?
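For reference on what the Ktau metric measures: Kendall's tau is the normalized difference between concordant and discordant pairs across all architecture pairs, comparing a predicted ranking against the ground-truth one. A minimal pure-Python version (tau-a, ignoring ties; the paper may use a tie-corrected variant such as tau-b):

```python
def kendall_tau(x, y):
    # Fraction of concordant minus discordant pairs over all n*(n-1)/2 pairs.
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Identical rankings give tau = 1; fully reversed rankings give tau = -1.
print(kendall_tau([0.90, 0.85, 0.88], [0.71, 0.66, 0.69]))  # 1.0
```

Differences in which architectures are sampled and whether ties are corrected can easily shift the reported value, which may explain part of the discrepancy being asked about.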

Thank you very much.

Question about the nasbench201_dict.npy file

Hello, what is the purpose of the nasbench201_dict.npy file? If I run your method on a new dataset, what in this file needs to be modified? Thank you for your trouble.
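As general background (the actual contents of nasbench201_dict.npy depend on the repo), a Python dictionary saved as a .npy file is typically written and read back like this; the stand-in data and file path below are illustrative only:

```python
import os
import tempfile

import numpy as np

# Stand-in data: e.g. an architecture-identifier -> accuracy mapping.
d = {"arch_0": 0.91, "arch_1": 0.88}

path = os.path.join(tempfile.gettempdir(), "nasbench201_dict_demo.npy")
np.save(path, d)  # a dict is stored as a pickled 0-d object array

# Loading requires allow_pickle=True, and .item() unwraps the dict.
loaded = np.load(path, allow_pickle=True).item()
print(loaded["arch_0"])  # 0.91
```

If the file is such a lookup table, adapting to a new dataset would presumably mean regenerating the dictionary's entries for that dataset, but only the authors can confirm its exact role.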

Computing gradients in the DA stage

Hello author, there are parts of your code that I recently found unclear.
Regarding the computation of grad_norm in the code: does the code obtain the corresponding per-sample weights without actually differentiating?
with torch.no_grad():
    probs = F.softmax(logits, dim=1)
    one_hot_targets = F.one_hot(targets, num_classes=args.num_classes)
    # per-sample gradient norm computed analytically, without a backward pass
    grad_norm = torch.norm(probs - one_hot_targets, dim=-1).detach().cpu().numpy()
loss = loss_per_sample.mean()  # mean loss to backward?
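The snippet above appears to rely on the standard identity that, for softmax cross-entropy, the gradient of the per-sample loss with respect to the logits is exactly probs - one_hot, so its norm needs no backward pass. A quick autograd check of that identity (variable names here are illustrative, not from the repo):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 10, requires_grad=True)  # batch of 4, 10 classes
targets = torch.randint(0, 10, (4,))

# Analytic per-sample gradient norm: ||softmax(logits) - one_hot||.
with torch.no_grad():
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes=10).float()
    analytic = torch.norm(probs - one_hot, dim=-1)

# Per-sample gradient norms via autograd, for comparison.
norms = []
for i in range(4):
    g = torch.autograd.grad(
        F.cross_entropy(logits[i:i + 1], targets[i:i + 1]), logits
    )[0][i]
    norms.append(g.norm())
autograd = torch.stack(norms)

print(torch.allclose(analytic, autograd, atol=1e-5))  # True
```

For a regression loss such as MSE, the analogous per-sample gradient with respect to the prediction would be proportional to the residual (prediction minus target), so its norm could play a similar role, though that is a guess at the authors' intent rather than their method.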
One more question: if this were a regression problem, how should the computation be modified?
I also saw your issue in IS; do you know how to convert the code you asked about there to PyTorch?

These are the doubts I have had while recently reading your code and the IS code, and I look forward to your answers!
Thank you!
