
Why does DeepMIM work?

jsrdcht commented on August 19, 2024
Why does DeepMIM work?


Comments (3)

raymin0223 commented on August 19, 2024

Hi, thanks to the authors for their great work!

From my understanding, DeepMIM regularizes the intermediate features so that they contain enough information to reconstruct the original images, just like the last feature.
This brings several benefits: 1) a regularization effect, 2) mitigation of vanishing gradients, and 3) target-aware, aligned information in the intermediate features.
That is, the mutual information between the intermediate and last features can be expected to increase, given that both share the same reconstruction targets.
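
To make this concrete, here is a minimal sketch of that idea as I understand it (a hypothetical PyTorch module, not DeepMIM's actual code; the names `blocks`, `main_head`, and `aux_heads` are mine, and the real head design and masking/decoder plumbing may differ):

```python
import torch
import torch.nn as nn

class DeepMIMSketch(nn.Module):
    """Hypothetical sketch of deep supervision for MIM; not DeepMIM's actual code."""
    def __init__(self, blocks, main_head, aux_heads):
        super().__init__()
        self.blocks = blocks                     # nn.ModuleList of ViT blocks
        self.main_head = main_head               # reconstruction head on the last layer
        self.aux_heads = nn.ModuleDict(          # extra reconstruction heads at chosen depths
            {str(i): h for i, h in aux_heads.items()})

    def forward(self, tokens, target_patches, mask):
        # tokens: [B, N, D] encoder input; target_patches: [B, N, P] pixel targets
        # mask: [B, N] boolean, True where a patch was masked out
        loss = tokens.new_zeros(())
        h = tokens
        for i, blk in enumerate(self.blocks):
            h = blk(h)
            if str(i) in self.aux_heads:         # auxiliary MIM loss at depth i
                pred = self.aux_heads[str(i)](h)
                loss = loss + ((pred - target_patches) ** 2)[mask].mean()
        pred = self.main_head(h)                 # standard MIM loss at the last layer
        loss = loss + ((pred - target_patches) ** 2)[mask].mean()
        return loss
```

Every supervised depth is pushed toward the same reconstruction targets as the last layer, which is exactly what should raise the mutual information between them.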

I guess the analysis in our paper (Self-Contrastive Learning) could be applied similarly to this work.
Since our loss function makes intermediate layers ($T$) output representations similar to those of the last layer ($F$) via a supervised contrastive loss, we proved some theoretical guarantees on the mutual information between them.

In the unsupervised case, we showed that self-contrasting guarantees a lower bound on $I(T, F)$, and that training was effective when freezing $F$ to prevent it from following $T$.
Even though DeepMIM sets the targets as the original images rather than the last-layer representations (i.e., the patches reconstructed from the last layer), I hope our work can provide some value for the authors. :)
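
For intuition, here is a minimal sketch of that kind of alignment loss (a simplified InfoNCE variant with a stop-gradient on $F$, written by me for illustration; it is not the exact loss from our paper):

```python
import torch
import torch.nn.functional as F

def self_contrastive_loss(t_feat: torch.Tensor, f_feat: torch.Tensor,
                          temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE alignment of an intermediate layer (T) with the last layer (F)."""
    f_feat = f_feat.detach()                 # freeze F so it does not follow T
    t_feat = F.normalize(t_feat, dim=-1)
    f_feat = F.normalize(f_feat, dim=-1)
    logits = t_feat @ f_feat.t() / temperature                    # [B, B] similarities
    labels = torch.arange(t_feat.size(0), device=t_feat.device)  # positives on the diagonal
    # InfoNCE lower-bounds the mutual information: I(T, F) >= log(B) - loss
    return F.cross_entropy(logits, labels)
```

The `detach()` is the point: $F$ stays fixed as a target while $T$ is pulled toward it, which is the freezing trick mentioned above.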


jsrdcht commented on August 19, 2024

> I guess the analysis in our paper (Self-Contrastive Learning) could be applied similarly to this work. [...]

I tend to think this technique is "correct nonsense". Since a ViT maintains feature resolution throughout its depth, the mutual information between low and high layers should already be well preserved. Why, then, do we need additional objectives to help? The reconstruction objective itself already requires maximizing this mutual information to complete the task.


raymin0223 commented on August 19, 2024

> I tend to think this technique is "correct nonsense". [...] The reconstruction objective itself already requires maximizing this mutual information to complete the task.

In my opinion, larger models are generally capable of learning more semantic information, since accuracy tends to improve as more transformer blocks are added.
This suggests that the information learned at earlier layers differs from that at deeper layers, similar to previous findings on convolutional networks.
Thus, I think DeepMIM can exploit information from various depths of the network via its auxiliary MIM losses on earlier layers.
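
In equation form, the overall objective under this reading would be the usual last-layer MIM loss plus weighted auxiliary MIM losses at a set $\mathcal{S}$ of earlier depths (the weights $\lambda_l$ are my assumption; DeepMIM's exact weighting may differ):

```latex
\mathcal{L} = \mathcal{L}_{\mathrm{MIM}}\big(g_L(h_L)\big)
            + \sum_{l \in \mathcal{S}} \lambda_l \, \mathcal{L}_{\mathrm{MIM}}\big(g_l(h_l)\big)
```

where $h_l$ is the depth-$l$ feature and $g_l$ the reconstruction head attached to it.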

