yuunnn-w / RWKV_Pytorch
An inference framework for the RWKV large language model, implemented purely in native PyTorch. The official implementation is overly complex and hard to extend. Let's bring RWKV into the flexible PyTorch ecosystem and build it in the open together!
☆130 · Updated last year
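Since the project's selling point is inference in pure PyTorch, below is a minimal sketch of the RWKV v4-style recurrent "WKV" step under that constraint. This is a hypothetical illustration, not this repository's actual API (all names such as `wkv_step`, `a`, `b`, `w`, `u` are made up): each new token updates a small per-channel state, so generation needs neither an attention KV cache nor custom CUDA kernels.

```python
# Minimal sketch of an RWKV v4-style recurrent WKV step in plain PyTorch.
# Illustrative only; names and signatures are hypothetical, not this repo's API.
import torch

def wkv_step(k, v, w, u, state):
    """One time-mixing step for a single token.

    k, v  : (d,) key/value projections of the current token
    w     : (d,) per-channel decay rate (applied as exp(-w))
    u     : (d,) per-channel bonus applied only to the current token
    state : (a, b) running weighted sums (numerator, denominator)
    """
    a, b = state
    e_cur = torch.exp(u + k)                 # current token, with its bonus
    wkv = (a + e_cur * v) / (b + e_cur)      # mix decayed history and current token
    decay = torch.exp(-w)
    a = decay * a + torch.exp(k) * v         # fold the token into the history
    b = decay * b + torch.exp(k)
    return wkv, (a, b)

d = 8
state = (torch.zeros(d), torch.zeros(d))     # O(d) state instead of a KV cache
w, u = torch.rand(d), torch.rand(d)
for _ in range(5):                           # toy autoregressive loop
    k, v = torch.randn(d), torch.randn(d)
    out, state = wkv_step(k, v, w, u, state)
```

Production implementations (including the official CUDA kernel) additionally track a running maximum exponent for numerical stability; this direct form can overflow on long sequences, but it shows why a per-token O(d) state update makes pure-PyTorch RWKV inference practical.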
Alternatives and similar repositories for RWKV_Pytorch
Users that are interested in RWKV_Pytorch are comparing it to the libraries listed below
- ☆147 · Updated last month
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆45 · Updated last month
- VisualRWKV is the visual-enhanced version of the RWKV language model, enabling RWKV to handle various visual tasks. ☆233 · Updated 4 months ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated last year
- Awesome RWKV Prompts for general users: user-friendly, ready-to-use prompt examples. ☆36 · Updated 8 months ago
- Reinforcement Learning Toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite-context training, and alignment. Exploring the… ☆54 · Updated 2 weeks ago
- RWKV fine-tuning ☆37 · Updated last year
- RAG system for RWKV ☆51 · Updated 10 months ago
- Evaluating LLMs with Dynamic Data ☆95 · Updated 2 months ago
- This project enables real-time training of the RWKV model. ☆49 · Updated last year
- ☆13 · Updated 9 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆327 · Updated 7 months ago
- Low-bit optimizers for PyTorch ☆131 · Updated last year
- ☆17 · Updated 9 months ago
- This project extends RWKV-LM's capabilities, including sequence classification/embedding/PEFT/cross-encoder/bi-encoder/multi modaliti… ☆10 · Updated last year
- ☆22 · Updated 9 months ago
- RWKV in nanoGPT style ☆192 · Updated last year
- The WorldRWKV project aims to implement training and inference across various modalities using the RWKV7 architecture. By leveraging diff… ☆58 · Updated last week
- Continuous batching and parallel acceleration for RWKV6 ☆23 · Updated last year
- ☆81 · Updated last year
- State tuning tunes the state ☆35 · Updated 7 months ago
- RWKV inference with multiple supported backends. ☆60 · Updated last week
- RWKV-LM-V7 (https://github.com/BlinkDL/RWKV-LM) under the Lightning framework ☆45 · Updated 2 months ago
- Inference for RWKV v5, v6 and v7 with the Qualcomm AI Engine Direct SDK ☆83 · Updated last week
- A quantization algorithm for LLMs ☆143 · Updated last year
- Get down and dirty with FlashAttention 2.0 in PyTorch: plug and play, no complex CUDA kernels ☆109 · Updated 2 years ago
- The homepage of the OneBit model quantization framework. ☆192 · Updated 8 months ago
- ☆125 · Updated last year
- ☆60 · Updated last year
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated last year