Comments (14)
We don't use positional encodings. That's a very interesting observation! What sort of positional encodings did you use?
If using flattened images (say CxHxW -> CxHW), then I believe the positional encodings can act as an indicator for new rows, especially in fixed-resolution settings. It'd also be interesting to see whether this phenomenon occurs with non-static-resolution datasets.
I've tried a learnable PE, adding it to the input whose shape is [seq_len, 1024].
I found that the PE is harmful to Mamba. 🤔
@albertfgu I just used learned positional encodings - the sequence length was 64 (4 by 4 patches of a 32 by 32 image). I will try to reproduce the results in a Colab and share it here.
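For concreteness, here is a minimal sketch of that kind of setup: learned positional embeddings added to flattened 4x4 patches of a 32x32 image before a Mamba block. The dimensions, the linear patch embedding, and the mean-pool classification head are assumptions for illustration, not apapiu's actual code:

```python
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # requires the mamba_ssm package (and its CUDA kernels) at runtime

class PatchMambaClassifier(nn.Module):
    def __init__(self, d_model=256, n_classes=10, img_size=32, patch=4, in_ch=3):
        super().__init__()
        self.patch = patch
        n_patches = (img_size // patch) ** 2                   # 64 tokens for 32x32 images with 4x4 patches
        self.embed = nn.Linear(in_ch * patch * patch, d_model)
        self.pos = nn.Parameter(0.02 * torch.randn(1, n_patches, d_model))  # learned positional embedding
        self.mamba = Mamba(d_model=d_model)                    # default d_state / d_conv / expand
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                                      # x: (B, C, H, W)
        B, C, H, W = x.shape
        p = self.patch
        x = x.unfold(2, p, p).unfold(3, p, p)                  # (B, C, H/p, W/p, p, p)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)
        tokens = self.embed(x) + self.pos                      # add the learned PE to every patch token
        out = self.mamba(tokens)                               # (B, L, d_model)
        return self.head(out.mean(dim=1))                      # mean-pool over the sequence
```

Dropping the `+ self.pos` term gives the no-PE baseline, which is what the comparison in this thread is about.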
@apapiu I also am curious about this. See this issue. While I agree with Tri's point 2 that SSMs have 'implicit' positional information due to their recurrent nature, I also feel that injecting explicit positional information may allow the model to find position-dependent patterns more easily.
In theory, any relative positional encoding can be realized (approximated) by SSMs. But manually adding certain positional information might still be helpful... (In general, recurrent weights are sensitive and therefore hard to train to the optimum?)
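To make the "realized by SSMs" point concrete, here is the standard unrolling argument for a time-invariant SSM (Mamba's selective scan makes A, B, C input-dependent, so this is only the LTI intuition): from h_t = A h_{t-1} + B x_t and y_t = C h_t one gets y_t = sum_{s <= t} C A^(t-s) B x_s, so each input x_s contributes with a weight that depends only on the relative offset t - s, the same quantity that a relative positional encoding parameterizes.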
Am I wrong in assuming that the simple Mamba module (from mamba_ssm import Mamba) doesn't make use of the ssm_state, which could be useful for recurrent connections across image patches or across time series like videos? Any ideas on how to approach this?
Am I wrong in assuming that the simple Mamba module (from mamba_ssm import Mamba) doesn't make use of the ssm_state, which could be useful for recurrent connections across image patches or across time series like videos? Any ideas on how to approach this?
You could refer to mamba_chat.🥰
We don't currently support passing the state through, but it will be supported in future versions.
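Until that lands, here is a toy illustration of what "passing the state through" would mean: a plain diagonal-SSM recurrence in PyTorch that carries its hidden state across chunks (e.g. one chunk per video frame). This is not the mamba_ssm API, just a sketch of the pattern:

```python
import torch

def chunked_diagonal_ssm(x_chunks, A, B, C, h0=None):
    # x_chunks: list of tensors, each (batch, chunk_len, d_in)
    # A: (d_state,) diagonal transition, B: (d_state, d_in), C: (d_out, d_state)
    h = h0 if h0 is not None else torch.zeros(
        x_chunks[0].shape[0], A.shape[0], dtype=A.dtype, device=A.device)
    outs = []
    for x in x_chunks:                      # e.g. one chunk per frame / group of patches
        ys = []
        for t in range(x.shape[1]):         # sequential scan inside the chunk
            h = A * h + x[:, t] @ B.T       # h_t = A * h_{t-1} + B x_t (elementwise diagonal A)
            ys.append(h @ C.T)              # y_t = C h_t
        outs.append(torch.stack(ys, dim=1))
    return torch.cat(outs, dim=1), h        # the final h can seed the next call
```

The only point is that the state returned from one chunk becomes the h0 of the next; a real implementation would use Mamba's selective (input-dependent) parameters and the fused kernels.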
I've tried a learnable PE, adding it to the input whose shape is [seq_len, 1024].
I found that the PE is harmful to Mamba. 🤔
Could this be specific to how learnable positional encodings interact with the Mamba architecture, though? Another commenter said they used learned positional encodings and saw an accuracy benefit.
I wonder what the effect of applying a RoPE-style method to the embedding space would be, given that it can't be applied to key and query vectors here since those don't exist.
I was just thinking it might be useful in cases where the input sample rate can vary, such as with audio. If we can implement positional embeddings like this, then we should be able to adjust for that as we can with Transformer models. I can't otherwise see how this would be possible with Mamba.
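As a rough illustration of "RoPE on the embedding space" (an assumption about what that could look like, not something the repo provides): rotate pairs of embedding dimensions by a position-dependent angle before the Mamba block.

```python
import torch

def rotate_embeddings(x, base=10000.0):
    # x: (batch, seq_len, d_model) with d_model even; rotates dimension pairs
    # (RoPE-style, pairing the two halves) by an angle that grows with position.
    b, L, d = x.shape
    half = d // 2
    freqs = base ** (-torch.arange(half, dtype=x.dtype, device=x.device) / half)
    angles = torch.arange(L, dtype=x.dtype, device=x.device)[:, None] * freqs   # (L, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin,       # standard 2-D rotation per pair
                      x1 * sin + x2 * cos], dim=-1)
```

For the variable-sample-rate case mentioned above (audio), the integer positions torch.arange(L) could be replaced by actual timestamps so the rotation tracks physical time rather than token index.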
Am I wrong in assuming that the simple Mamba module (from mamba_ssm import Mamba) doesn't make use of the ssm_state, which could be useful for recurrent connections across image patches or across time series like videos? Any ideas on how to approach this?
I have a naive solution that, for now, doesn't get the memory savings and speed of the Mamba CUDA code.
The idea: one can modify the reference code provided in the mamba codebase and replace the for loop with an associative scan operator. If the memory cost and step time of the Mamba baseline are taken as the unit, the for loop takes roughly 2x (not sure) the memory and 1000x the time, while the associative scan takes about 2x the memory and 10x the time. (So the Mamba kernel is amazing!)
Still, since I don't know how to write CUDA, I hope Santa drops a version with hidden-state support soon.
@albertfgu do you think it's worth creating a ref_speedup version that accelerates the reference implementation for this use case? It could satisfy those looking to tweak the existing code without filing CUDA feature requests.
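A sketch of that associative-scan idea in pure PyTorch (a toy illustration, not the actual ref_speedup code): the recurrence h_t = a_t * h_{t-1} + b_t is associative under the combine rule (a1, b1) followed by (a2, b2) gives (a1*a2, a2*b1 + b2), so the O(L) for loop can be replaced by a log-depth Hillis-Steele scan.

```python
import torch

def associative_scan(a, b):
    # a, b: (batch, L, d). Returns h with h[:, t] = a[:, t] * h[:, t-1] + b[:, t], h_{-1} = 0.
    L = a.shape[1]
    step = 1
    while step < L:
        # combine each position t >= step with the partial result `step` positions to its left:
        # (a_left, b_left) then (a_right, b_right) -> (a_left * a_right, a_right * b_left + b_right)
        new_a = torch.cat([a[:, :step], a[:, :-step] * a[:, step:]], dim=1)
        new_b = torch.cat([b[:, :step], a[:, step:] * b[:, :-step] + b[:, step:]], dim=1)
        a, b = new_a, new_b
        step *= 2
    return b   # b now holds the inclusive scan, i.e. all h_t

# quick check against the sequential for loop
B, L, D = 2, 16, 4
a, b = torch.rand(B, L, D), torch.randn(B, L, D)
h, hs = torch.zeros(B, D), []
for t in range(L):
    h = a[:, t] * h + b[:, t]
    hs.append(h)
assert torch.allclose(associative_scan(a, b), torch.stack(hs, dim=1), atol=1e-5)
```

As noted above, this materializes the intermediate a and b tensors, so it trades extra memory for the log-depth parallelism; the fused CUDA kernel avoids that overhead.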
I tried to speed up the reference implementation with an associative scan:
https://github.com/radarFudan/mamba/tree/ref_speedup
It's faster than the reference, but still about 10x slower than the CUDA code. The advantage is that it's pure PyTorch, so people without CUDA skills can play with it.
Check out the graphical speed comparison here:
https://github.com/radarFudan/mamba/blob/ref_speedup/speedup_ref_with_associative_scan.pdf
(Blue is ref in CPU mode, yellow is speedup_ref with PyTorch's associative scan, green is mamba in CUDA mode.)
It can be argued that diagonal SSMs use a variant of RoPE when
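One way to complete that argument (an assumption here, not necessarily the original point): with a complex diagonal state matrix A = diag(r_k e^{i theta_k}), the kernel C A^(t-s) B from the unrolled recurrence contains factors r_k^(t-s) e^{i theta_k (t-s)}, i.e. rotations whose angle depends only on the relative position t - s, which is the same form of rotation that RoPE applies to query/key pairs.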
@apapiu any update on this?
Related Issues (20)
- Questions about Chunk_size using Triton optimization in SSD kernel
- When I run mamba2 : ImportError: libcudart.so.11.0: cannot open shared object file: No such file or directory
- Possible bug when running evaluation with self.use_mem_eff_path=False
- Typo of dconv at Line 231 of modules/mamba2.py
- How to load mamba1's weight to mamba2 ?
- Small datasets
- Help with _chunk_state_fwd.
- Assertion error in ssd_minimal
- Questions regarding pretrained Mamba2-Attention Hybrid Model
- (about the paper) In the Section5.1, I have a question: Why M matrix, whose element is also matrix, can finally be (T, T) size?
- A mamba scaling problem given the perplexity score curves shown in the TTT paper
- Passing an initial_conv_state in mamba_split_conv1d_scan_combined?
- Self-distillation technique
- Question for 'self.use_mem_eff_path and inference_params'
- triton.runtime.autotuner.OutOfResources: out of resource: shared memory, Required: 254208, Hardware limit: 101376.
- I want to ask does anyone know how to solve this problem
- /anaconda3/lib/python3.11/site-packages/causal_conv1d_cuda.cpython-311-x86_64-linux-gnu.so: undefined symbol: _ZN3c107WarningC1ENS_7variantIJNS0_11UserWarningENS0_18DeprecationWarningEEEERKNS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEb
- Mamba-2 Error: `'NoneType' object has no attribute 'causal_conv1d_fwd'`
- Used selective_scan_cuda and causal_conv1d_cuda, but still very slow to train
- mamba / self-attention hybrid generation