Joluck / mod-rwkv
The WorldRWKV project aims to implement training and inference across various modalities using the RWKV7 architecture. By leveraging different encoders, the project allows for flexible modality switching and aspires to achieve end-to-end cross-modal inference.
☆61 · Updated this week
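The header describes a common multimodal pattern: a per-modality encoder produces feature vectors, an adapter projects them into the language model's embedding space, and the RWKV7 backbone consumes the projected sequence recurrently. The following is a minimal dependency-free sketch of that pattern only; every class and method name here is illustrative and is not WorldRWKV's real API.

```python
# Hypothetical sketch of the encoder -> projector -> backbone pipeline implied
# by the project description. All names (AudioEncoder, Projector, StubRWKV7)
# are invented for illustration; none come from the WorldRWKV codebase.
import random

EMBED_DIM = 8  # toy embedding width for the stubbed backbone

class AudioEncoder:
    """Stand-in for a pretrained speech encoder producing per-chunk features."""
    feat_dim = 4
    def encode(self, waveform):
        # One pseudo-feature vector per 4-sample chunk of the input.
        return [[sum(chunk) / len(chunk)] * self.feat_dim
                for chunk in (waveform[i:i + 4] for i in range(0, len(waveform), 4))]

class Projector:
    """Linear-style adapter from encoder features to the LM embedding width."""
    def __init__(self, in_dim, out_dim):
        random.seed(0)
        self.w = [[random.uniform(-0.1, 0.1) for _ in range(in_dim)]
                  for _ in range(out_dim)]
    def __call__(self, feats):
        return [[sum(w_i * f_i for w_i, f_i in zip(row, f)) for row in self.w]
                for f in feats]

class StubRWKV7:
    """Recurrent stub: folds each embedding into a running state, RNN-style."""
    def __init__(self, dim):
        self.state = [0.0] * dim
    def forward(self, embeddings):
        for e in embeddings:
            self.state = [0.9 * s + x for s, x in zip(self.state, e)]
        return self.state

# "Modality switching" then amounts to swapping AudioEncoder for, say, an
# image encoder that honors the same .encode() contract.
encoder = AudioEncoder()
proj = Projector(encoder.feat_dim, EMBED_DIM)
model = StubRWKV7(EMBED_DIM)
state = model.forward(proj(encoder.encode([0.1 * i for i in range(16)])))
print(len(state))  # one state vector of the backbone's embedding width
```

The design choice worth noting is that only the projector needs to be trained per modality when the encoder and backbone are frozen, which is what makes encoder swapping cheap.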
Alternatives and similar repositories for mod-rwkv
Users interested in mod-rwkv are comparing it to the repositories listed below.
- RWKV-LM-V7 (https://github.com/BlinkDL/RWKV-LM) under the Lightning framework ☆51 · Updated last month
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆46 · Updated 3 months ago
- ☆17 · Updated 11 months ago
- ☆39 · Updated 7 months ago
- ☆159 · Updated last month
- Reinforcement learning toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite-context training, alignment. Exploring the… ☆56 · Updated 3 months ago
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference engine. Capable of inference by combining multiple states (pseudo-MoE). Easy to deploy… ☆46 · Updated last month
- ☆23 · Updated 11 months ago
- RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture, integrating sparse attention to improve the model's l… ☆52 · Updated 5 months ago
- VisualRWKV is the visual-enhanced version of the RWKV language model, enabling RWKV to handle various visual tasks. ☆237 · Updated 6 months ago
- Efficient RWKV inference engine. RWKV7 7.2B fp16 decoding at 10,250 tok/s on a single 5090. ☆70 · Updated last week
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… ☆25 · Updated last month
- A specialized RWKV-7 model for Othello (a.k.a. Reversi) that predicts legal moves, evaluates positions, and performs in-context search. It… ☆43 · Updated 10 months ago
- 😊 TPTT: Transforming Pretrained Transformers into Titans ☆43 · Updated 3 weeks ago
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆144 · Updated last week
- [ICML 2025] Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization ☆104 · Updated 6 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best… ☆58 · Updated 9 months ago
- State tuning tunes the state ☆35 · Updated 10 months ago
- ☆34 · Updated last year
- An inference framework for the RWKV large language model implemented purely in native PyTorch. The official native implementation… ☆132 · Updated last year
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆147 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 7 months ago
- BlackGoose Rimer: RWKV as a Superior Architecture for Large-Scale Time Series Modeling ☆29 · Updated 5 months ago
- imagetokenizer is a Python package that helps you encode visuals and generate visual token IDs from a codebook; supports both image and video… ☆37 · Updated last year
- An exploratory experiment to enable frozen pretrained RWKV language models to accept speech-modality input. We followed the… ☆54 · Updated 11 months ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆104 · Updated last year
- RAG system for RWKV ☆51 · Updated last year
- Fast, modular code to create and train cutting-edge LLMs ☆68 · Updated last year
- RWKV-7: Surpassing GPT ☆101 · Updated last year
- Official repo for "Error-Free Linear Attention is a Free Lunch: Exact Solution from Continuous-Time Dynamics" ☆50 · Updated this week