
Weights in the Attention Layer? about sampn (closed, 7 comments)

tbwxmu commented on August 26, 2024
Weights in the Attention Layer?


Comments (7)

JanoschMenke commented on August 26, 2024

No comment?


whymauri commented on August 26, 2024

I believe W_a and W_b are W_att and E_g, respectively, in the paper. See equations (4) and (5).

https://jcheminf.biomedcentral.com/articles/10.1186/s13321-020-0414-z


JanoschMenke commented on August 26, 2024

No, in the code 'att_w' refers to W_att, and 'att_hidden' refers to E_g. The two weight matrices 'W_a' and 'W_b' are not referenced in the paper, maybe because it is assumed that people know that weight matrices are used in attention, but I feel this should not be assumed in a cheminformatics journal.


whymauri commented on August 26, 2024

Sorry, I was thrown off by the poor naming convention in the paper.

In the code, the W_x naming convention generally denotes network weight parameters held in a Torch layer. In the paper, however, W_att is not a matrix of neural network weight parameters (i.e. a weight matrix) but the "attention score matrix" instead. att_hiddens is certainly E_G, that much is true (although it is just a vector).

So this doesn't fully answer why W_a and W_b exist. W_a is clear to me: it makes the attention differentiable and therefore "learnable" via the W_a layer's weight parameters.

Why must E_g pass through W_b? I don't quite know, but we know what it does:

  1. Bounds all of the "attention-weighted hidden vectors," which are really just entries of a vector, above zero.

  2. Adds regularization via dropout.

  3. Makes the attention vector learnable before concatenating with the molecular graph's latent representation.

Why (1)-(3) are desirable, especially (3), I'm unsure. However, I'm fairly certain W_a is necessary. I agree that the paper should have clarified that the soft attention it leverages is differentiable and learnable. (A rough sketch of the whole block follows below.)
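For concreteness, here is a minimal PyTorch sketch of an attention readout of the kind described above. The layer names (W_a, W_b), the shapes, and the ReLU/dropout choices are assumptions pieced together from this thread, not a verbatim copy of the sampn code:

  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  class AttentiveReadout(nn.Module):
      """Soft self-attention over the per-atom hidden vectors of one molecule (sketch)."""

      def __init__(self, hidden_size: int, dropout: float = 0.1):
          super().__init__()
          # W_a: makes the attention scores differentiable and learnable
          self.W_a = nn.Linear(hidden_size, hidden_size, bias=False)
          # W_b: extra learnable transform applied to the attended vectors
          self.W_b = nn.Linear(hidden_size, hidden_size)
          self.dropout = nn.Dropout(dropout)

      def forward(self, cur_hiddens: torch.Tensor) -> torch.Tensor:
          # cur_hiddens: (n_atoms, hidden_size) activations from message passing
          scores = torch.matmul(self.W_a(cur_hiddens), cur_hiddens.t())
          att_w = F.softmax(scores, dim=1)                # attention score matrix (W_att in the paper)
          att_hiddens = torch.matmul(att_w, cur_hiddens)  # attended vectors (E_G in the paper)
          att_hiddens = F.relu(self.W_b(att_hiddens))     # (1) bounded above zero
          att_hiddens = self.dropout(att_hiddens)         # (2) dropout regularization
          return cur_hiddens + att_hiddens                # (3) plus the add discussed below

  readout = AttentiveReadout(hidden_size=8)
  mol_vec = readout(torch.randn(5, 8))                    # toy molecule: 5 atoms, hidden size 8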

Cheers.


JanoschMenke commented on August 26, 2024

I think W_b just adds another transformation, but the question is whether it is required.
What I also do not understand is this line:
mol_vec = (cur_hiddens + att_hiddens)

Here att_hiddens = torch.matmul(att_w, cur_hiddens),
so the attention weights are already applied to the activations. Why add the original activations again to the attention-transformed activations?

It looks kind of like a skip connection, though I think it is not intuitively understandable why a skip connection is needed here.
Why scale the activations based on attention and then just add the unscaled activations back on top? (A small runnable example of the two variants follows below.)
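To make the question concrete, here is a tiny self-contained sketch contrasting the readout with and without the extra add. The names are borrowed from the code above; the toy score matrix is an assumption just to have something runnable:

  import torch

  torch.manual_seed(0)
  n_atoms, hidden_size = 5, 8
  cur_hiddens = torch.randn(n_atoms, hidden_size)              # per-atom activations
  att_w = torch.softmax(cur_hiddens @ cur_hiddens.t(), dim=1)  # toy attention score matrix

  att_hiddens = torch.matmul(att_w, cur_hiddens)               # attention-weighted activations

  mol_vec_attention_only = att_hiddens                         # readout without the extra add
  mol_vec_with_skip = cur_hiddens + att_hiddens                # readout as written in the code

  # The extra term keeps the original activations, so attention acts as a learned correction.
  print(torch.allclose(mol_vec_attention_only, mol_vec_with_skip))  # False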


tbwxmu commented on August 26, 2024

Sorry for the late reply; I have been busy with my graduation. I have read all of your comments. There is no conflict between our paper and the source code: the paper is just a brief description of the code, so please refer to the source code when something seems to conflict. The code here is optimized for our prediction tasks based on my experience and intuition. If you read our code carefully, you will note that we also use a skip connection in the message passing steps. Of course, you can change or choose a different attention algorithm, as several variants have been published by others.


JanoschMenke commented on August 26, 2024

Hi, thanks for the clarification.

Good luck with your graduation.

