
Mamba Transformer

Home Page: https://discord.gg/GYbXvDGevY

Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling.

This is a 100% novel architecture that I designed to combine the strengths of SSMs and Attention into an all-new advanced architecture, with the goal of surpassing our old limits: faster processing, longer context lengths, lower perplexity over long sequences, and enhanced reasoning, all while remaining small and compact.

The architecture is essentially: x -> norm -> mamba -> norm -> transformer -> norm -> ffn -> norm -> out.

I added many normalizations because I believe that, without them, training stability would be severely degraded by integrating two foreign architectures with one another.
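Concretely, the stack reads as one residual block per layer. Below is a minimal PyTorch sketch of that layout using LayerNorm and stand-in modules (nn.Identity marks where a real Mamba/SSM block would go); it is an illustration of the flow described above, not the library's actual implementation:

import torch
import torch.nn as nn

class MambaTransformerBlockSketch(nn.Module):
    def __init__(self, dim: int, heads: int, ff_mult: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.mamba = nn.Identity()  # placeholder for a real Mamba/SSM block
        self.norm2 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm3 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, dim * ff_mult),
            nn.GELU(),
            nn.Linear(dim * ff_mult, dim),
        )
        self.norm4 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.mamba(self.norm1(x)) + x  # SSM sub-block with residual
        a = self.norm2(x)
        x = self.attn(a, a, a)[0] + x  # attention sub-block with residual
        x = self.ffn(self.norm3(x)) + x  # feed-forward sub-block with residual
        return self.norm4(x)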

Install

pip3 install mambatransformer

Usage

import torch
from mamba_transformer import MambaTransformer

# Generate a random tensor of shape (1, 10) with values between 0 and 99
x = torch.randint(0, 100, (1, 10))

# Create an instance of the MambaTransformer model
model = MambaTransformer(
    num_tokens=100,  # Number of tokens in the input sequence
    dim=512,  # Dimension of the model
    heads=8,  # Number of attention heads
    depth=4,  # Number of transformer layers
    dim_head=64,  # Dimension of each attention head
    d_state=512,  # Dimension of the state
    dropout=0.1,  # Dropout rate
    ff_mult=4,  # Multiplier for the feed-forward layer dimension
    return_embeddings=False,  # Whether to return embeddings
    transformer_depth=2,  # Number of transformer blocks
    mamba_depth=10,  # Number of Mamba blocks
    use_linear_attn=True,  # Whether to use linear attention
)

# Pass the input tensor through the model and print the output shape
out = model(x)

print(out.shape)
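# Expected: torch.Size([1, 10, 100]), i.e. logits over the 100-token vocabulary at each position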


# After training
model.eval()

# Would you like to train this model? Zeta Corporation offers unmatchable GPU clusters at unbeatable prices, let's partner!

# Tokenizer
# Note: `generate` is not implemented on MambaTransformer (see the demo
# issue below), so this call raises an AttributeError; a hand-rolled
# decoding loop is sketched after this example.
# model.generate(text)
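Since `generate` is missing, one workaround is a small decoding loop around the forward pass. Below is a minimal greedy-decoding sketch, assuming the model returns (batch, seq, num_tokens) logits as in the example above; `greedy_generate` is a hypothetical helper, not part of the library:

import torch

@torch.no_grad()
def greedy_generate(model, prompt_ids: torch.Tensor, max_new_tokens: int = 20) -> torch.Tensor:
    # Repeatedly run the model, take the highest-scoring token at the last
    # position, and append it to the sequence.
    model.eval()
    ids = prompt_ids
    for _ in range(max_new_tokens):
        logits = model(ids)  # (batch, seq, num_tokens)
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # (batch, 1)
        ids = torch.cat([ids, next_id], dim=-1)
    return ids

# Usage: out_ids = greedy_generate(model, torch.randint(0, 100, (1, 10)))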

License

MIT

Contributors

dependabot[bot], kyegomez, liberatedwinner


Issues

[BUG] error on GPU

Thanks for sharing the awesome work.

I got the following error when running the example on a GPU:

python example.py
2024-01-13 21:46:44.846591: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-01-13 21:46:44.846624: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-01-13 21:46:44.848009: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-01-13 21:46:44.854341: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-01-13 21:46:45.757644: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-01-13 21:46:47,012 - numexpr.utils - INFO - Note: NumExpr detected 12 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
2024-01-13 21:46:47,013 - numexpr.utils - INFO - NumExpr defaulting to 8 threads.
Traceback (most recent call last):
  File "/home/jma/Documents/sanity_test/mamba/mamba/MambaTransformer/example.py", line 17, in <module>
    out = model(x)
  File "/home/jma/Documents/sanity_test/mamba/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/jma/Documents/sanity_test/mamba/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/jma/Documents/sanity_test/mamba/mamba/MambaTransformer/mamba_transformer/model.py", line 213, in forward
    x = mamba(x) + x
  File "/home/jma/Documents/sanity_test/mamba/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/jma/Documents/sanity_test/mamba/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/jma/Documents/sanity_test/mamba/lib/python3.9/site-packages/zeta/nn/modules/simple_mamba.py", line 118, in forward
    y = self.ssm(x)
  File "/home/jma/Documents/sanity_test/mamba/lib/python3.9/site-packages/zeta/nn/modules/simple_mamba.py", line 158, in ssm
    y = self.selective_scan(
  File "/home/jma/Documents/sanity_test/mamba/lib/python3.9/site-packages/zeta/nn/modules/simple_mamba.py", line 205, in selective_scan
    x = deltaA[:, :, i] * x + deltaB_u[:, :, i]
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

To Reproduce

import torch
from mamba_transformer import MambaTransformer, MambaTransformerblock


x = torch.randn(10, 25, 512).to("cuda:0")
model = MambaTransformerblock(
    dim=512,
    heads=8,
    depth=4,
    dim_head=64,
    d_state=512,
    dropout=0.1,
    ff_mult=4
).to("cuda:0")

# Pass the input tensor through the model and print the output shape
out = model(x)

print(f"{x.shape=}, {out.shape=}")

[BUG] [DEMO] demo fails

Installed from source, on Google Colab:

torch.Size([1, 10, 100])

---------------------------------------------------------------------------

AttributeError                            Traceback (most recent call last)

[<ipython-input-4-7735537b2fa4>](https://localhost:8080/#) in <cell line: 29>()
     27 
     28 # Tokenizer
---> 29 model.generate(text)
     30 

[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in __getattr__(self, name)
   1693             if name in modules:
   1694                 return modules[name]
-> 1695         raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
   1696 
   1697     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:

AttributeError: 'MambaTransformer' object has no attribute 'generate'
This happens while running the example in the README (the decoding-loop sketch in the Usage section above is one workaround).
