
Comments (5)

PotatoSpudowski commented on May 25, 2024

Hi,
I was able to get the 30B param model working.
13B should work fine too, and so should 65B (if someone releases it xD)

You can look at this branch
https://github.com/PotatoSpudowski/fastLLaMa/tree/alpaca-lora

You will have to follow the build steps and convert the model again.

The issue with LoRA models is their embedding size. Based on how the LoRA method works (it creates low-rank decomposition matrices and freezes the pretrained weights), I suspect that is why we see different embedding sizes compared to non-LoRA models.
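
Roughly, the idea behind that decomposition (a minimal numpy sketch; the shapes are illustrative, this is not fastLLaMa code):

```python
import numpy as np

# Frozen pretrained weight, e.g. a 4096x4096 attention projection.
d_model = 4096
W = np.random.randn(d_model, d_model).astype(np.float32)

# LoRA trains two small matrices of rank r << d_model instead of W.
r = 8
A = np.random.randn(r, d_model).astype(np.float32) * 0.01
B = np.zeros((d_model, r), dtype=np.float32)

# Effective weight at inference: W + B @ A. Only A and B
# (2 * r * d_model values) live in the adapter checkpoint, which is
# why a LoRA checkpoint's tensor shapes differ from a plain LLaMA one.
W_eff = W + B @ A
assert W_eff.shape == W.shape
```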

Will need to sort out a few things before merging to main, but feel free to use this and let me know if you face any issues :)


PotatoSpudowski commented on May 25, 2024

Merged to main.

The structure of fastLlama.Model() has been updated. Please change your code accordingly!
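
After the change, initialisation looks roughly like this (a hedged sketch; the parameter names and the "ALPACA-LORA-7B" identifier are illustrative, so check the repo's example files for the real signature):

```python
import fastLlama

MODEL_PATH = "./models/ALPACA-LORA-7B/ggml-model.bin"  # illustrative path

model = fastLlama.Model(
    id="ALPACA-LORA-7B",  # ModelIdentifier: selects the backend config
    path=MODEL_PATH,      # converted ggml weights
    num_threads=8,        # CPU threads (illustrative value)
)
```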


PotatoSpudowski commented on May 25, 2024

I will try and get it integrated tonight ;)


robin-coac commented on May 25, 2024

Hi @PotatoSpudowski. I was curious how Alpaca models are handled differently. For example, llama.cpp requires Alpaca models to have the n_parts and ins flags. Are those accounted for?
My C/C++ skills are not good enough to navigate your code.


PotatoSpudowski commented on May 25, 2024

Yup, that's why we require users to specify the ModelIdentifier when initialising the model.
Based on the identifier, we choose the config from the backend (which tells us about parts, vocab size, etc.). It is an underrated feature of fastLLaMa which, imo, is the right way to go about it.
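
The lookup works conceptually like this (a sketch; the field names and numbers are assumptions, not the actual backend code — Alpaca-LoRA conversions often add one extra pad token, hence the differing vocab size):

```python
# Hypothetical identifier-to-config table: users pass a ModelIdentifier
# and never have to specify n_parts or vocab size by hand.
MODEL_CONFIGS = {
    "LLAMA-7B":       {"n_parts": 1, "vocab_size": 32000},
    "LLAMA-13B":      {"n_parts": 2, "vocab_size": 32000},
    "LLAMA-30B":      {"n_parts": 4, "vocab_size": 32000},
    "ALPACA-LORA-7B": {"n_parts": 1, "vocab_size": 32001},  # extra pad token
}

def config_for(model_id: str) -> dict:
    if model_id not in MODEL_CONFIGS:
        raise ValueError(f"Unknown ModelIdentifier: {model_id}")
    return MODEL_CONFIGS[model_id]
```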

The ins flag, if I am not wrong, is supposed to specify that the model runs in instruction mode, right? Either way, we have example files for Alpaca and LLaMA models which show how to use these models for either text completion or Q&A tasks.
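
In instruction mode the input is essentially just wrapped in the standard Alpaca prompt template, something like this (the ingest/generate calls are hypothetical, shown only to illustrate the flow; see the example files for the real API):

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="Explain what a LoRA adapter is.")

# Hypothetical calls: feed the prompt, then stream tokens to stdout.
model.ingest(prompt)
model.generate(num_tokens=256, streaming_fn=print)
```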

Finally, we are also working on redesigning our save and load feature and optimising it for latency and size in the feature/save_load branch. Extremely GOATED implementation!
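
Once that lands, usage might look something like this (method names are guesses, not the final API):

```python
# Hypothetical: persist the session state so a long prompt does not
# have to be re-ingested on the next run.
model.save_state("./session.bin")

# ...later, in a fresh process:
model = fastLlama.Model(id="ALPACA-LORA-7B", path=MODEL_PATH, num_threads=8)
model.load_state("./session.bin")
```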

Developers should be allowed to implement their own workflows using features built from first-principles thinking, rather than us deciding workflows for them. Will document everything extensively so it is easier for everyone!!!

