
Positional Encoding · mamba · OPEN · 14 comments

apapiu commented on July 17, 2024
Positional Encoding


Comments (14)

albertfgu commented on July 17, 2024

We don't use positional encodings. That's a very interesting observation! What sort of positional encodings did you use?


sukjunhwang commented on July 17, 2024

If using flattened images (say CxHxW -> CxHW), then I believe the positional encodings can act as an indicator for new rows, especially in fixed-resolution settings. It'd also be interesting to see whether this phenomenon holds on datasets with non-static resolutions.
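
A minimal sketch of that row-indicator idea, assuming a learned, factorized 2D embedding (the module name and sizes are illustrative, not from the thread):

```python
import torch
import torch.nn as nn

class Factorized2DPosEmbed(nn.Module):
    """Learned row + column embeddings for a flattened H x W patch grid.

    When CxHxW features are flattened to a length-H*W sequence (row-major),
    the row table gives the model an explicit 'new row' signal.
    """
    def __init__(self, height: int, width: int, dim: int):
        super().__init__()
        self.row = nn.Parameter(torch.zeros(height, dim))
        self.col = nn.Parameter(torch.zeros(width, dim))
        self.height, self.width = height, width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, H*W, dim), tokens in row-major order
        pos = self.row[:, None, :] + self.col[None, :, :]     # (H, W, dim)
        return x + pos.reshape(self.height * self.width, -1)  # broadcast over batch
```

With this factorization, the row table alone carries the new-row information; a single flat table of H*W vectors could of course learn the same thing implicitly.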


EricLina commented on July 17, 2024

I've tried a learnable PE, adding it to the input, whose shape is [seq_len, 1024].
I found that the PE is harmful to Mamba. 🤔
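
For reference, a minimal sketch of the setup described above, assuming an illustrative sequence length (the comment only fixes the width at 1024; mamba_ssm's fused kernels expect a CUDA device):

```python
import torch
import torch.nn as nn
from mamba_ssm import Mamba

seq_len, d_model = 64, 1024  # [seq_len, 1024] per the comment; 64 is an assumed length
pos_embed = nn.Parameter(torch.zeros(1, seq_len, d_model, device="cuda"))
nn.init.trunc_normal_(pos_embed, std=0.02)

block = Mamba(d_model=d_model).to("cuda")            # default d_state/d_conv/expand
x = torch.randn(4, seq_len, d_model, device="cuda")  # (batch, seq_len, d_model)
y = block(x + pos_embed)                             # learnable PE added to the input
```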


apapiu commented on July 17, 2024

@albertfgu I just used learned positional encodings; the sequence length was 64 (4-by-4 patches of a 32-by-32 image). I will try to reproduce the results in a Colab and share them here.
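
A sketch of that setup, assuming a standard convolutional patch embedding (d_model and batch size are illustrative):

```python
import torch
import torch.nn as nn

# 32x32 image, 4x4 patches -> an 8x8 grid, i.e. 64 tokens, as described above.
patch, img, d_model = 4, 32, 256  # d_model is an assumed width
to_tokens = nn.Conv2d(3, d_model, kernel_size=patch, stride=patch)
pos_embed = nn.Parameter(torch.zeros(1, (img // patch) ** 2, d_model))  # learned PE

x = torch.randn(8, 3, img, img)                    # (batch, channels, H, W)
tokens = to_tokens(x).flatten(2).transpose(1, 2)   # (8, 64, d_model)
tokens = tokens + pos_embed                        # sequence fed into the Mamba stack
```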


hrbigelow commented on July 17, 2024

@apapiu I am also curious about this. See this issue. While I agree with Tri's point 2 that SSMs have 'implicit' positional information due to their recurrent nature, I also feel that injecting explicit positional information may allow the model to find position-dependent patterns more easily.


radarFudan commented on July 17, 2024

In theory, any relative positional encoding can be realized (approximated) by SSMs, but manually adding certain positional information might still be helpful... (In general, recurrent weights are sensitive and therefore hard to train to the optimum?)


drapado commented on July 17, 2024

Am I wrong to assume that using the simple `from mamba_ssm import Mamba` module doesn't make use of the ssm_state, which could be useful for recurrent connections from image patches or from time series like videos? Any ideas on how to approach this?
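
To make the concern concrete, a small sketch (the shapes are hypothetical): each forward call of the Mamba module starts from a fresh internal state, so chunked calls are not equivalent to one call over the full sequence:

```python
import torch
from mamba_ssm import Mamba

m = Mamba(d_model=256).to("cuda").eval()
# e.g. 16 video frames of 196 patch tokens each, concatenated along time
frames = torch.randn(1, 16 * 196, 256, device="cuda")

with torch.no_grad():
    y_full = m(frames)  # the recurrence runs across the whole sequence
    # Per-chunk calls: no ssm_state is carried between calls, so no
    # information flows from one chunk (frame) to the next.
    y_chunks = torch.cat([m(c) for c in frames.chunk(16, dim=1)], dim=1)

print(torch.allclose(y_full, y_chunks))  # False: the state was reset per chunk
```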


EricLina commented on July 17, 2024

> Am I wrong to assume that using the simple `from mamba_ssm import Mamba` module doesn't make use of the ssm_state, which could be useful for recurrent connections from image patches or from time series like videos? Any ideas on how to approach this?

You could refer to mamba_chat.🥰


albertfgu commented on July 17, 2024

We don't currently support passing the state through, but it will be supported in future versions.


ElliottDyson commented on July 17, 2024

> I've tried a learnable PE, adding it to the input, whose shape is [seq_len, 1024].
> I found that the PE is harmful to Mamba. 🤔

Could this be specific to how learnable positional encodings interact with the Mamba architecture, though? Another commenter said they used learned positional encodings and saw a benefit in accuracy.

I wonder what the effect of applying a RoPE-style method to the embedding space would be, given that it can't be applied to key and query vectors, which don't exist here. (A sketch of this idea follows after this comment.)

I was just thinking it might be useful in cases where the input sample rate can vary, such as with audio. If we can implement positional embeddings like this, we should be able to adjust for that as we can with transformer models; I can't otherwise see how this would be possible with Mamba.
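
A minimal sketch of the rotate-the-embeddings idea above, with a position scale that could absorb a varying sample rate (the function and its parameters are illustrative assumptions, not an established recipe):

```python
import torch

def rope_embed(x: torch.Tensor, pos_scale: float = 1.0, base: float = 10000.0):
    """Rotate embedding channel pairs by position-dependent angles (RoPE-style).

    x: (batch, seq_len, dim), dim even. pos_scale rescales positions, e.g.
    ref_rate / sample_rate to compensate for audio at a different rate.
    """
    _, L, d = x.shape
    half = d // 2
    inv_freq = base ** (-torch.arange(half, dtype=x.dtype, device=x.device) / half)
    pos = torch.arange(L, dtype=x.dtype, device=x.device) * pos_scale
    angles = pos[:, None] * inv_freq                 # (L, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

x = torch.randn(2, 1024, 256)
y = rope_embed(x, pos_scale=16000 / 44100)  # hypothetical rate adjustment
```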


radarFudan commented on July 17, 2024

> Am I wrong to assume that using the simple `from mamba_ssm import Mamba` module doesn't make use of the ssm_state, which could be useful for recurrent connections from image patches or from time series like videos? Any ideas on how to approach this?

I have a naive solution that might not enjoy the memory savings and high speed of the Mamba CUDA kernels for now.

The idea: one can modify the reference code provided in the mamba codebase and replace the for loop with an associative scan operator. If the memory cost and step time of the Mamba kernel are taken as the unit, the for loop takes roughly 2x the memory (not sure) and 1000x the time; with an associative scan, you get about 2x the memory and only 10x the time. (This is why the Mamba kernel is amazing!)

Still, as I don't know how to write CUDA, I hope Santa drops a version with hidden-state support soon.

@albertfgu, do you think it's worth creating a ref_speedup version that accelerates the reference path for this use case? It could satisfy those looking to tweak the existing code without filing CUDA feature requests.
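
A minimal PyTorch sketch of that replacement, assuming a diagonal recurrence h_t = A_t * h_{t-1} + Bx_t (an illustration of the approach, not the actual ref_speedup branch):

```python
import torch

def scan_diag_ssm(A: torch.Tensor, Bx: torch.Tensor) -> torch.Tensor:
    """Inclusive scan for h_t = A_t * h_{t-1} + Bx_t with h_0 = 0; A, Bx: (L, N).

    Uses the associative combine (A2, b2) o (A1, b1) = (A2*A1, A2*b1 + b2)
    with stride-doubling passes (log depth) instead of a length-L for loop.
    """
    L = A.shape[0]
    offset = 1
    while offset < L:
        A_new, b_new = A.clone(), Bx.clone()
        b_new[offset:] = A[offset:] * Bx[:-offset] + Bx[offset:]
        A_new[offset:] = A[offset:] * A[:-offset]
        A, Bx = A_new, b_new
        offset *= 2
    return Bx  # Bx[t] == h_t

# Quick check against the naive for loop:
L_, N_ = 64, 16
A, Bx = torch.rand(L_, N_) * 0.9, torch.randn(L_, N_)
h, out = torch.zeros(N_), []
for t in range(L_):
    h = A[t] * h + Bx[t]
    out.append(h)
assert torch.allclose(scan_diag_ssm(A, Bx), torch.stack(out), atol=1e-5)
```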


radarFudan commented on July 17, 2024

I tried to speed up the ref with an associative scan:
https://github.com/radarFudan/mamba/tree/ref_speedup

It's faster than ref but still about 10x slower than the CUDA code. The advantage is that it's pure PyTorch, so people without CUDA skills can play with it.

Check out the graphical speedup comparison here:
https://github.com/radarFudan/mamba/blob/ref_speedup/speedup_ref_with_associative_scan.pdf

(Blue is ref in CPU mode, yellow is speedup_ref using PyTorch's associative scan, green is Mamba in CUDA mode.)


josmithiii commented on July 17, 2024

It can be argued that diagonal SSMs use a variant of RoPE when $A(n,n)$ has an imaginary part.
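
Spelling that out: for a diagonal entry $a = e^{(-\alpha + i\omega)\Delta}$, unrolling the recurrence gives

```latex
h_t = \sum_{k \le t} a^{\,t-k} B x_k
    = \sum_{k \le t} e^{-\alpha \Delta (t-k)} \, e^{i \omega \Delta (t-k)} \, B x_k ,
```

so each input's contribution is rotated by an angle proportional to the relative offset t - k, the same relative rotation RoPE applies to query/key pairs, here modulated by a decay factor.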


Anri-Lombard commented on July 17, 2024

@apapiu any update on this?

