Comments (12)

gaopeng-eugene avatar gaopeng-eugene commented on June 8, 2024 2

In the ICCV paper, the authors only proposed alternating optimization trained with resilient SGD. Joint optimization is definitely not the right solution. If the authors had proposed joint optimization in the ICCV paper, the contribution would have been minor and it would not have won the ICCV best paper award.

from fully-differentiable-deep-ndf-tf.

chrischoy avatar chrischoy commented on June 8, 2024 1

Impeccable reasoning!
Joint optimization is wrong because, otherwise, they wouldn't have won the award!

chrischoy avatar chrischoy commented on June 8, 2024 1

Okay, I'm not working on this project. Why don't you implement their method and compare? You just have to remove the softmax variables from the optimization and optimize them separately using the update rule they proposed. It wouldn't be that difficult, and this way you can actually contribute to society. (:
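For anyone trying this: the closed-form leaf update from the Deep Neural Decision Forests paper can be sketched in NumPy roughly as below. The variable names and shapes (`mu`, `pi`, `labels`) are my assumptions, not code from this repo:

```python
import numpy as np

def leaf_update(mu, pi, labels, n_iter=20):
    """Alternating-optimization update for the leaf distributions pi.

    mu:     (N, L) routing probabilities mu_l(x) for each sample/leaf
    pi:     (L, C) current leaf class distributions
    labels: (N,)   integer class labels
    Returns the updated (L, C) leaf distributions.
    """
    N, L = mu.shape
    C = pi.shape[1]
    for _ in range(n_iter):
        p = mu @ pi                      # (N, C) tree predictions P(y|x)
        new_pi = np.zeros_like(pi)
        for c in range(C):
            mask = labels == c           # samples of class c
            # sum over samples of class c: pi_l(c) * mu_l(x) / P(c|x)
            new_pi[:, c] = pi[:, c] * (mu[mask] / p[mask, c:c + 1]).sum(axis=0)
        pi = new_pi / new_pi.sum(axis=1, keepdims=True)  # renormalize per leaf
    return pi
```

Note this only touches the leaves; the routing network would still be trained by gradient descent between these updates, which is exactly the alternation being debated here.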

gaopeng-eugene avatar gaopeng-eugene commented on June 8, 2024

In the ICCV paper supplement, they prove that the alternating optimization for pi is convex and guaranteed to converge to a unique optimum within a few iterations. The information gain is very similar to that of random forests and decision trees. However, with joint optimization no such guarantee exists, unless you can give a proof. Please go to the author's homepage and read the proof.

gaopeng-eugene avatar gaopeng-eugene commented on June 8, 2024

Another reason I find this paper interesting is the dynamic routing in capsule networks and the follow-up paper with EM optimization. It seems that these two papers have some link.

gaopeng-eugene avatar gaopeng-eugene commented on June 8, 2024

Another way to show that joint optimization is wrong is to visualize the distributions in the leaves. There are no patterns, because they are only a feature representation that does not carry any semantic meaning. However, when you train by EM, samples of the same class follow similar routing paths. The leaf distributions are more interpretable, and past experience shows that decision tree leaves are very easy to understand.

gaopeng-eugene avatar gaopeng-eugene commented on June 8, 2024

http://www.dsi.unive.it/~srotabul/files/publications/CVPR2014a_supp.pdf

chrischoy avatar chrischoy commented on June 8, 2024

Seems like you just read the analysis but have not yet understood the disadvantages of EM and why it requires a convexity analysis.

First, the EM algorithm, or alternating optimization in general, suffers from slow convergence. This is because the variables that are held fixed slow down the convergence of the variable being optimized.

Second, alternating optimization is not guaranteed to converge to the global optimum. This is why additional analysis is required for alternating optimization.

I explained the same thing in the README. So please read it before you make random comments that do not make much sense.
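The coupling effect behind the first point can be illustrated on a toy quadratic. This is only an illustration, not the forest objective: the function and its coupling constant `c` are my choice.

```python
def alternating_sweeps(c, sweeps, x0=1.0, y0=1.0):
    """Exact alternating minimization of f(x, y) = x^2 + y^2 + 2*c*x*y
    for |c| < 1; the unique minimum is (0, 0)."""
    x, y = x0, y0
    for _ in range(sweeps):
        x = -c * y   # argmin over x with y held fixed
        y = -c * x   # argmin over y with x held fixed
    return x, y

# Each full sweep shrinks the error by a factor of c**2: the more
# strongly the two variables are coupled (c close to 1), the slower
# alternating minimization converges, even though each sub-problem
# is solved exactly.
```

With `c = 0.5`, ten sweeps land essentially at the optimum; with `c = 0.99`, ten sweeps have barely moved.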

gaopeng-eugene avatar gaopeng-eugene commented on June 8, 2024

Thank you so much for your answer. Have you ever compared joint optimization with alternating optimization? If joint optimization is the right answer, why did the author choose the much more complex and harder-to-implement alternating optimization?

gaopeng-eugene avatar gaopeng-eugene commented on June 8, 2024

The author just responded to me, saying that joint optimization is OK, but they only reported the EM method because it is more effective.
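For what it's worth, "joint optimization" here just means keeping unconstrained logits for the leaves and pushing gradients through a row-wise softmax, so the leaf distributions stay on the simplex while everything trains in one loop. A minimal NumPy sketch of one such step; the names and shapes (`mu`, `theta`, `labels`) are my assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # stabilize
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def joint_leaf_step(mu, theta, labels, lr=0.05):
    """One gradient step on unconstrained leaf logits theta.

    mu:     (N, L) routing probabilities
    theta:  (L, C) leaf logits; pi = softmax(theta) stays on the simplex
    labels: (N,)   integer class labels
    """
    N = mu.shape[0]
    pi = softmax(theta)                    # (L, C) leaf distributions
    p = mu @ pi                            # (N, C) predictions P(y|x)
    g_pi = np.zeros_like(pi)               # d NLL / d pi
    for i, y in enumerate(labels):
        g_pi[:, y] -= mu[i] / (N * p[i, y])
    # backprop through the row-wise softmax Jacobian
    g_theta = pi * (g_pi - (g_pi * pi).sum(axis=1, keepdims=True))
    return theta - lr * g_theta
```

In practice TensorFlow would compute `g_theta` automatically; the point of writing it out is that nothing about the joint formulation is ill-defined, the question is only whether it converges as well as the closed-form alternating update.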

gaopeng-eugene avatar gaopeng-eugene commented on June 8, 2024

Sure, I will test it on ROI regions with joint optimization. Again, big thanks for the implementation. Looking forward to seeing your interesting work. 3D-R2N2 is very good 😊

gaopeng-eugene avatar gaopeng-eugene commented on June 8, 2024

Do you know of any paper proposing boosted differentiable decision forests? Basically, the newly added tree would focus on the misclassified samples?
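I'm not aware of a specific paper for this, but the generic AdaBoost/SAMME-style reweighting loop wraps around any learner, differentiable or not. A hypothetical sketch, where `fit_tree(X, y, w) -> predict` is whatever weighted tree learner you supply (that API is my invention):

```python
import numpy as np

def boost(X, y, fit_tree, rounds=5):
    """AdaBoost/SAMME-style sample reweighting around an arbitrary
    weighted learner fit_tree(X, y, w) -> predict (hypothetical API)."""
    n = len(y)
    w = np.full(n, 1.0 / n)                   # uniform sample weights
    ensemble = []
    for _ in range(rounds):
        predict = fit_tree(X, y, w)
        wrong = predict(X) != y               # boolean mask of mistakes
        err = np.clip(w[wrong].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err) # learner's vote weight
        w = w * np.exp(alpha * wrong)         # upweight misclassified samples
        w = w / w.sum()
        ensemble.append((alpha, predict))
    return ensemble
```

For a differentiable forest, the sample weights `w` would simply scale each sample's term in the training loss, so each newly added tree concentrates on what the previous ones got wrong.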
