JL-er / RWKV-PEFT
☆147 · Updated last month
Alternatives and similar repositories for RWKV-PEFT
Users interested in RWKV-PEFT are comparing it to the repositories listed below.
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated last year
- This project enables real-time training of the RWKV model. ☆49 · Updated last year
- Reinforcement Learning Toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite-context training, alignment. Exploring the… ☆54 · Updated 2 weeks ago
- Awesome RWKV Prompts: user-friendly, ready-to-use prompt examples for all users. ☆36 · Updated 8 months ago
- RAG system for RWKV ☆51 · Updated 10 months ago
- An inference framework for the RWKV large language model, implemented purely in native PyTorch. The official native implementation… ☆130 · Updated last year
- ☆81 · Updated last year
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆45 · Updated last month
- Evaluating LLMs with Dynamic Data ☆95 · Updated 2 months ago
- ☆13 · Updated 9 months ago
- RWKV in nanoGPT style ☆192 · Updated last year
- ☆17 · Updated 9 months ago
- RWKV-LM-V7 (https://github.com/BlinkDL/RWKV-LM) under the Lightning framework ☆45 · Updated 2 months ago
- State tuning tunes the state ☆35 · Updated 7 months ago
- RWKV fine-tuning ☆37 · Updated last year
- VisualRWKV is the visually enhanced version of the RWKV language model, enabling RWKV to handle various visual tasks. ☆233 · Updated 4 months ago
- This project extends RWKV LM's capabilities, including sequence classification/embedding/PEFT/cross encoder/bi encoder/multi modaliti… ☆10 · Updated last year
- Direct Preference Optimization for RWKV, aiming at RWKV-5 and 6. ☆11 · Updated last year
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference engine, capable of inference by combining multiple states (pseudo-MoE). Easy to deploy… ☆44 · Updated 3 weeks ago
- The WorldRWKV project aims to implement training and inference across various modalities using the RWKV7 architecture. By leveraging diff… ☆58 · Updated last week
- ☆41 · Updated last year
- Fast modular code to create and train cutting-edge LLMs ☆68 · Updated last year
- RWKV, in easy-to-read code ☆72 · Updated 6 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best … ☆411 · Updated 2 years ago
- RWKV-7: Surpassing GPT ☆96 · Updated 10 months ago
- The all-in-one RWKV runtime box with embed, RAG, AI agents, and more. ☆582 · Updated 3 weeks ago
- ☆38 · Updated 5 months ago
- Fine-tuning RWKV-World model ☆26 · Updated 2 years ago
- RWKV-7 mini ☆11 · Updated 6 months ago
- ☆34 · Updated last year
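
Most of the projects above, RWKV-PEFT included, revolve around parameter-efficient fine-tuning (PEFT) of RWKV checkpoints: small trainable adapters are attached to a frozen base model so that tuning is cheap in memory and compute. The snippet below is a minimal sketch of that idea using the Hugging Face port of RWKV together with the `peft` library; it is not RWKV-PEFT's own API, and the checkpoint id and target-module names are assumptions.

```python
# Minimal LoRA-on-RWKV sketch via Hugging Face transformers + peft.
# NOT the RWKV-PEFT repo's API; checkpoint id and target modules are assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "RWKV/rwkv-4-169m-pile"  # assumed small RWKV checkpoint on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# LoRA injects trainable low-rank matrices into selected linear layers while
# the original weights stay frozen; that freezing is the core PEFT trick.
config = LoraConfig(
    r=8,                     # rank of the low-rank update
    lora_alpha=16,           # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["key", "value", "receptance"],  # assumed RWKV layer names
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights require grad
```

Because only the adapters are trained, variants such as LoRA, state tuning, and DPO-style preference tuning (all represented in the list above) can run on modest hardware while the shared base weights stay untouched.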