yuunnn-w / RWKV_Pytorch
This is an inference framework for the RWKV large language model implemented purely in native PyTorch. The official implementation is overly complex and hard to extend. Join the flexible PyTorch ecosystem and let's open-source it together!
☆132 · Updated last year
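The headline claim is RWKV inference in plain PyTorch with no custom CUDA kernels. As a hedged illustration of what that entails (this is not RWKV_Pytorch's actual API; the function name and signature below are hypothetical), here is a minimal sketch of the numerically stable RWKV-4 "WKV" recurrence from the published RWKV formulation, run token by token in RNN style:

```python
# Hypothetical sketch of RWKV-4's stable WKV recurrence in pure PyTorch.
# Not the RWKV_Pytorch API; names (w, u, k, v) follow the RWKV paper.
import torch

def wkv_step(k_t, v_t, w, u, state):
    """One recurrent step per channel: k_t, v_t are the current key/value,
    w is the positive per-channel decay, u the current-token bonus, and
    state = (aa, bb, pp) holds the running numerator, denominator, and
    max exponent kept for numerical stability."""
    aa, bb, pp = state
    # Output: exponentially weighted average of past values plus this token.
    p = torch.maximum(pp, u + k_t)
    e1, e2 = torch.exp(pp - p), torch.exp(u + k_t - p)
    wkv = (e1 * aa + e2 * v_t) / (e1 * bb + e2)
    # State update: decay the past by exp(-w), then fold in the current token.
    p2 = torch.maximum(pp - w, k_t)
    e1, e2 = torch.exp(pp - w - p2), torch.exp(k_t - p2)
    return wkv, (e1 * aa + e2 * v_t, e1 * bb + e2, p2)

C = 8
w = torch.rand(C) + 0.1                                  # decay > 0
u = torch.zeros(C)
state = (torch.zeros(C), torch.zeros(C), torch.full((C,), -1e38))
for _ in range(4):                                       # RNN-style decoding
    k_t, v_t = torch.randn(C), torch.randn(C)
    out, state = wkv_step(k_t, v_t, w, u, state)
```

Because the recurrence carries a fixed-size state, per-token inference cost is constant in sequence length, which is what keeps a pure-PyTorch implementation practical without attention kernels.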
Alternatives and similar repositories for RWKV_Pytorch
Users interested in RWKV_Pytorch are comparing it to the libraries listed below
- ☆171 · Updated 3 weeks ago
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆47 · Updated 5 months ago
- VisualRWKV is the visual-enhanced version of the RWKV language model, enabling RWKV to handle various visual tasks. ☆243 · Updated 3 weeks ago
- Reinforcement Learning Toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite-context training, alignment. Exploring the… ☆62 · Updated 4 months ago
- The WorldRWKV project aims to implement training and inference across various modalities using the RWKV7 architecture. By leveraging diff… ☆66 · Updated last month
- RAG system for RWKV ☆52 · Updated last year
- Evaluating LLMs with Dynamic Data ☆111 · Updated 3 weeks ago
- RWKV-LM-V7 (https://github.com/BlinkDL/RWKV-LM) under the Lightning framework ☆56 · Updated last month
- Awesome RWKV Prompts: user-friendly, ready-to-use prompt examples for general users. ☆35 · Updated last year
- This project enables real-time training of the RWKV model. ☆50 · Updated last year
- ☆13 · Updated last year
- RWKV fine-tuning ☆37 · Updated last year
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆340 · Updated 11 months ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated last year
- A quantization algorithm for LLMs ☆148 · Updated last year
- Get down and dirty with FlashAttention 2.0 in PyTorch; plug and play, no complex CUDA kernels ☆112 · Updated 2 years ago
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference engine. Capable of inference by combining multiple states (pseudo-MoE). Easy to deploy… ☆47 · Updated 3 months ago
- Continuous batching and parallel acceleration for RWKV6 ☆22 · Updated last year
- RWKV inference with multiple supported backends. ☆77 · Updated last week
- ROSA-Tuning ☆65 · Updated last week
- ☆68 · Updated last year
- RWKV v5, v6, and v7 inference with the Qualcomm AI Engine Direct SDK ☆90 · Updated last week
- State tuning tunes the state ☆35 · Updated last year
- RWKV in nanoGPT style ☆197 · Updated last year
- This repo is an exploratory experiment to enable frozen pretrained RWKV language models to accept speech modality input. We followed the… ☆54 · Updated last year
- ☆125 · Updated 2 years ago
- ☆23 · Updated last year
- ☆17 · Updated last year
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… ☆30 · Updated 2 weeks ago
- RWKV-7 mini ☆12 · Updated 10 months ago