
faster-rwkv's Introduction

Faster RWKV

CUDA

Convert Model

  1. Generate a ChatRWKV weight file with v2/convert_model.py (in the ChatRWKV repo) and the strategy cuda fp16 (see the example after this list).

  2. Generate a faster-rwkv weight file with tools/convert_weight.py. For example, python3 tools/convert_weight.py RWKV-4-World-CHNtuned-1.5B-v1-20230620-ctx4096-converted-fp16.pth rwkv-4-1.5b-chntuned-fp16.fr.
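
For step 1, the invocation looks like the following (a sketch; the flags may differ across ChatRWKV versions, so check python3 v2/convert_model.py --help in your checkout):

python3 v2/convert_model.py --in RWKV-4-World-CHNtuned-1.5B-v1-20230620-ctx4096.pth --out RWKV-4-World-CHNtuned-1.5B-v1-20230620-ctx4096-converted-fp16.pth --strategy "cuda fp16"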

Build

mkdir build
cd build
cmake -DFR_ENABLE_CUDA=ON -DCMAKE_BUILD_TYPE=Release -GNinja ..
ninja

Run

./chat tokenizer_file_path weight_file_path "cuda fp16"

For example, ./chat ../tokenizer_model ../rwkv-4-1.5b-chntuned-fp16.fr "cuda fp16"

Android

Convert Model

  1. Generate a ChatRWKV weight file with v2/convert_model.py (in the ChatRWKV repo) and the strategy cuda fp32 or cpu fp32. Note that although fp32 is used here, the real dtype is determined in the following step.

  2. Generate a faster-rwkv weight file with tools/convert_weight.py.

  3. Export an ncnn model with ./export_ncnn <input_faster_rwkv_model_path> <output_path_prefix>. You can download a pre-built export_ncnn from Releases if you are a Linux user, or build it yourself. A full pipeline example follows this list.
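
Putting the three steps together (a sketch; the model file names are illustrative, and the convert_model.py flags may differ across ChatRWKV versions):

python3 v2/convert_model.py --in RWKV-4-World-0.1B.pth --out RWKV-4-World-0.1B-converted-fp32.pth --strategy "cpu fp32"
python3 tools/convert_weight.py RWKV-4-World-0.1B-converted-fp32.pth rwkv-4-0.1b-world.fr
./export_ncnn rwkv-4-0.1b-world.fr rwkv-4-0.1b-world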

Build

Android App Development

Download the pre-built Android AAR library from Releases, or run aar/build_aar.sh to build it yourself.

Android C++ Development

For the path of Android NDK and toolchain file, please refer to Android NDK docs.

mkdir build
cd build
cmake -DFR_ENABLE_NCNN=ON -DANDROID_ABI=arm64-v8a -DANDROID_PLATFORM=android-28 -DANDROID_NDK=xxxx -DCMAKE_TOOLCHAIN_FILE=xxxx -DCMAKE_BUILD_TYPE=Release -GNinja ..
ninja
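
For example, assuming the NDK is unpacked at $HOME/android-ndk-r25c (an illustrative path; the toolchain file ships inside the NDK under build/cmake), the cmake step becomes:

cmake -DFR_ENABLE_NCNN=ON -DANDROID_ABI=arm64-v8a -DANDROID_PLATFORM=android-28 -DANDROID_NDK=$HOME/android-ndk-r25c -DCMAKE_TOOLCHAIN_FILE=$HOME/android-ndk-r25c/build/cmake/android.toolchain.cmake -DCMAKE_BUILD_TYPE=Release -GNinja ..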

Run in Termux (skip this if you are an app developer)

  1. Copy chat onto the Android phone (using adb or Termux).

  2. Copy the tokenizer_model file and the ncnn models (.param, .bin and .config) onto the Android phone (using adb or Termux).

  3. Run ./chat tokenizer_model ncnn_models_basename "ncnn fp16" in adb shell or Termux. For example, if the ncnn models are named rwkv-4-chntuned-1.5b.param, rwkv-4-chntuned-1.5b.bin and rwkv-4-chntuned-1.5b.config, the command is ./chat tokenizer_model rwkv-4-chntuned-1.5b "ncnn fp16". See the adb example after this list.
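
For example, using adb (/data/local/tmp is a common adb-writable location; the model names are illustrative):

adb push chat tokenizer_model rwkv-4-chntuned-1.5b.param rwkv-4-chntuned-1.5b.bin rwkv-4-chntuned-1.5b.config /data/local/tmp/
adb shell
cd /data/local/tmp
chmod +x chat
./chat tokenizer_model rwkv-4-chntuned-1.5b "ncnn fp16"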

Requirements

  • Android System >= 9.0

  • RAM >= 4GB (for 1.5B model)

  • No hard requirement for CPU. More powerful = faster.

Android Demo

Run one of the following commands in Termux to download prebuilt executables and models automatically. The download script can resume partially downloaded files, so feel free to Ctrl-C and restart it if the speed is too slow.

Executables, 1.5B CHNtuned int8 model, 1.5B CHNtuned int4 model and 0.1B world int8 model:

curl -L -s https://raw.githubusercontent.com/daquexian/faster-rwkv/master/download_binaries_and_models_termux.sh | bash -s 3

Executables, 1.5B CHNtuned int4 model and 0.1B world int8 model:

curl -L -s https://raw.githubusercontent.com/daquexian/faster-rwkv/master/download_binaries_and_models_termux.sh | bash -s 2

Executables and 0.1B world int8 model:

curl -L -s https://raw.githubusercontent.com/daquexian/faster-rwkv/master/download_binaries_and_models_termux.sh | bash -s 1

Executables only:

curl -L -s https://raw.githubusercontent.com/daquexian/faster-rwkv/master/download_binaries_and_models_termux.sh | bash -s 0

Export ONNX

  1. Install the rwkv2onnx Python package: pip install rwkv2onnx.

  2. Clone https://github.com/BlinkDL/ChatRWKV.

  3. Run rwkv2onnx <input path> <output path> <ChatRWKV path>. For example, rwkv2onnx ~/RWKV-5-World-0.1B-v1-20230803-ctx4096.pth ~/RWKV-5-0.1B.onnx ~/ChatRWKV.
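
The exported model can be run with onnxruntime. Below is a minimal sketch that only inspects the graph signature, since the exact input/output tensor names and RWKV state layout produced by rwkv2onnx are not documented here; adapt your feed dict to what it prints.

# Minimal sketch: inspect the exported graph with onnxruntime.
# The model path is illustrative; read the tensor names and shapes
# from the printout before building a feed dict for inference.
import onnxruntime as ort

sess = ort.InferenceSession("RWKV-5-0.1B.onnx", providers=["CPUExecutionProvider"])

for inp in sess.get_inputs():
    print("input:", inp.name, inp.shape, inp.type)
for out in sess.get_outputs():
    print("output:", out.name, out.shape, out.type)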

TODO

faster-rwkv's People

Contributors

asakusarinne, daquexian, dependabot[bot], nihui


faster-rwkv's Issues

Failing to run `chat` on ARM Linux. Error: `Error at 56 in faster-rwkv/kernels/ncnn/model_forward.cpp`

Hi,

I am trying to run the 430M model on my ARM development board, but inference fails after I type some words. I was able to convert the model successfully using ChatRWKV's conversion tool and export_ncnn.cpp.

❯ ./chat ../tokenizer_model rwkv-4-430m "ncnn fp16"
User: Hello!
Assistant:terminate called after throwing an instance of 'std::runtime_error'
  what():  Error at 56 in /home/marty/Documents/faster-rwkv/kernels/ncnn/model_forward.cpp
[1]    47987 abort (core dumped)  ./chat ../tokenizer_model rwkv-4-430m "ncnn fp16"

It seems to fail at this line.

RV_CHECK(output.c == 1 && output.d == 1 && output.h == 1);

Is there a way to resolve this?

Thank you.

ONNX Inference on CPU

A guide would be fantastic, I haven't been able to run inference on the onnx exported model so far

Help doing export to Onnx?

hey, do you need any help with the export to onnx script, I'm happy to put in some time to help figure this out.

I've seen a few examples, and look like it just needs setting up with dummy inputs. There's also onnx addons, that allow embedding of the tokeniser which would be nice to figure out.

I'm on the RWKV discord if you fancy a more direct chat about it.

Compilation error: error: more than one conversion function from "half" to a built-in type applies

  • Situation:
    After cmake finishes, running ninja -j8 fails with the following error:
/root/Dev/MyPR/faster-rwkv/kernels/cuda/layer_norm.cu(68): error: more than one conversion function from "half" to a built-in type applies:
            function "__half::operator float() const"
/usr/local/cuda/include/cuda_fp16.hpp(204): here
            function "__half::operator short() const"
/usr/local/cuda/include/cuda_fp16.hpp(222): here
            function "__half::operator unsigned short() const"
/usr/local/cuda/include/cuda_fp16.hpp(225): here
            function "__half::operator int() const"
/usr/local/cuda/include/cuda_fp16.hpp(228): here
            function "__half::operator unsigned int() const"
/usr/local/cuda/include/cuda_fp16.hpp(231): here
            function "__half::operator long long() const"
/usr/local/cuda/include/cuda_fp16.hpp(234): here
            function "__half::operator unsigned long long() const"
/usr/local/cuda/include/cuda_fp16.hpp(237): here
            function "__half::operator __nv_bool() const"
/usr/local/cuda/include/cuda_fp16.hpp(241): here
          detected during:
            instantiation of "void rwkv::cuda::layer_norm::<unnamed>::AffineStore<SRC, DST, do_scale, do_center>::store<N>(const SRC *, int64_t, int64_t) [with SRC=float, DST=half, do_scale=true, do_center=true, N=4]" 
/root/Dev/MyPR/faster-rwkv/kernels/cuda/layer_norm.cuh(645): here
            instantiation of "void rwkv::cuda::layer_norm::LayerNormBlockSMemImpl<LOAD,STORE,ComputeType,pack_size,block_size>(LOAD, STORE, int64_t, int64_t, double, ComputeType *, ComputeType *) [with LOAD=rwkv::cuda::layer_norm::DirectLoad<half, half>, STORE=rwkv::cuda::layer_norm::<unnamed>::AffineStore<float, half, true, true>, ComputeType=float, pack_size=4, block_size=128]" 
/root/Dev/MyPR/faster-rwkv/kernels/cuda/layer_norm.cuh(725): here
            instantiation of "cudaError_t rwkv::cuda::layer_norm::TryDispatchLayerNormBlockSMemImplBlockSize<LOAD,STORE,ComputeType,pack_size>(cudaStream_t, LOAD, STORE, int64_t, int64_t, double, ComputeType *, ComputeType *, __nv_bool *) [with LOAD=rwkv::cuda::layer_norm::DirectLoad<half, half>, STORE=rwkv::cuda::layer_norm::<unnamed>::AffineStore<float, half, true, true>, ComputeType=float, pack_size=4]" 
/root/Dev/MyPR/faster-rwkv/kernels/cuda/layer_norm.cuh(847): here
            instantiation of "cudaError_t rwkv::cuda::layer_norm::TryDispatchLayerNormBlockSMemImplPackSize<LOAD, STORE, ComputeType>::operator()(cudaStream_t, LOAD, STORE, int64_t, int64_t, double, ComputeType *, ComputeType *, __nv_bool *) [with LOAD=rwkv::cuda::layer_norm::DirectLoad<half, half>, STORE=rwkv::cuda::layer_norm::<unnamed>::AffineStore<float, half, true, true>, ComputeType=float]" 
/root/Dev/MyPR/faster-rwkv/kernels/cuda/layer_norm.cuh(870): here
            instantiation of "cudaError_t rwkv::cuda::layer_norm::TryDispatchLayerNormBlockSMemImpl(cudaStream_t, LOAD, STORE, int64_t, int64_t, double, ComputeType *, ComputeType *, __nv_bool *) [with LOAD=rwkv::cuda::layer_norm::DirectLoad<half, half>, STORE=rwkv::cuda::layer_norm::<unnamed>::AffineStore<float, half, true, true>, ComputeType=float]" 
/root/Dev/MyPR/faster-rwkv/kernels/cuda/layer_norm.cuh(994): here
            instantiation of "std::enable_if<<expression>, cudaError_t>::type rwkv::cuda::layer_norm::DispatchLayerNorm(cudaStream_t, LOAD, STORE, int64_t, int64_t, double, ComputeType *, ComputeType *) [with LOAD=rwkv::cuda::layer_norm::DirectLoad<half, half>, STORE=rwkv::cuda::layer_norm::<unnamed>::AffineStore<float, half, true, true>, ComputeType=float]" 
(96): here
            instantiation of "void rwkv::cuda::layer_norm::LayerNormForwardGpu(int64_t, int64_t, double, const T *, const T *, const T *, T *) [with T=half]" 
(120): here

6 errors detected in the compilation of "/root/Dev/MyPR/faster-rwkv/kernels/cuda/layer_norm.cu".
[118/286] Building CUDA object CMakeFiles/faster_rwkv_internal.dir/kernels/cuda/cat.cu.o
ninja: build stopped: subcommand failed.
  • OS: WSL2 (Ubuntu 20.04)
  • cmake version: 3.27.0
  • cmake command: cmake -DFR_ENABLE_CUDA=ON -DCMAKE_BUILD_TYPE=Release -GNinja ..
