Joluck / mod-rwkv
The WorldRWKV project aims to implement training and inference across various modalities using the RWKV7 architecture. By leveraging different encoders, the project allows for flexible modality switching and aspires to achieve end-to-end cross-modal inference.
☆66 · Updated last month
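The description above boils down to one architectural pattern: a swappable, modality-specific encoder whose features are projected into the RWKV embedding space, so a single RWKV-7 backbone can consume audio, vision, or text tokens alike. Below is a minimal sketch of that adapter pattern. It is illustrative only; the class names, dimensions, and the LSTM stand-in for the RWKV backbone are all assumptions, not WorldRWKV's actual API.

```python
import torch
import torch.nn as nn

class ModalityAdapter(nn.Module):
    """Hypothetical sketch: project an arbitrary encoder's features into
    the RWKV embedding space so the backbone can treat them as tokens."""
    def __init__(self, encoder: nn.Module, enc_dim: int, rwkv_dim: int):
        super().__init__()
        self.encoder = encoder              # e.g. a speech or vision encoder
        self.proj = nn.Linear(enc_dim, rwkv_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(x)             # (batch, seq, enc_dim)
        return self.proj(feats)             # (batch, seq, rwkv_dim)

# Toy stand-ins so the sketch runs end to end.
enc_dim, rwkv_dim = 256, 512
encoder = nn.Linear(80, enc_dim)            # pretend mel-spectrogram encoder
backbone = nn.LSTM(rwkv_dim, rwkv_dim, batch_first=True)  # stand-in for RWKV-7

adapter = ModalityAdapter(encoder, enc_dim, rwkv_dim)
audio = torch.randn(2, 100, 80)             # (batch, frames, mel bins)
tokens = adapter(audio)                     # pseudo-token embeddings
out, _ = backbone(tokens)                   # feed the backbone
print(out.shape)                            # torch.Size([2, 100, 512])
```

Under this layout, switching modality means swapping the encoder/adapter pair while the backbone weights stay shared, which is the flexibility the project description refers to.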
Alternatives and similar repositories for mod-rwkv
Users interested in mod-rwkv are comparing it to the libraries listed below.
- ☆171 · Updated last month
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton (a minimal sketch of the core recurrence appears after this list) ☆48 · Updated 5 months ago
- RWKV-LM-V7 (https://github.com/BlinkDL/RWKV-LM) under the Lightning framework ☆56 · Updated last month
- ☆17 · Updated last year
- Reinforcement learning toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite-context training, and alignment. Exploring the… ☆62 · Updated 4 months ago
- ☆41 · Updated 9 months ago
- VisualRWKV is the visual-enhanced version of the RWKV language model, enabling RWKV to handle various visual tasks. ☆244 · Updated last month
- RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture, integrating sparse attention to improve the model's l… ☆54 · Updated last month
- This is an inference framework for the RWKV large language model implemented purely in native PyTorch. The official native implementation… ☆132 · Updated last year
- RAG system for RWKV ☆52 · Updated last year
- Efficient RWKV inference engine. RWKV-7 7.2B fp16 decoding at 10,250 tokens/s on a single RTX 5090. ☆78 · Updated last week
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… ☆30 · Updated 2 weeks ago
- ☆23 · Updated last year
- A fast RWKV tokenizer written in Rust ☆54 · Updated 6 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆59 · Updated 10 months ago
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆154 · Updated last month
- This repo is an exploratory experiment to enable frozen pretrained RWKV language models to accept speech-modality input. We followed the… ☆54 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 9 months ago
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference engine, capable of combining multiple states (pseudo-MoE). Easy to deploy… ☆47 · Updated 3 months ago
- State tuning tunes the state ☆35 · Updated last year
- A specialized RWKV-7 model for Othello (a.k.a. Reversi) that predicts legal moves, evaluates positions, and performs in-context search. It… ☆43 · Updated last year
- ☆34 · Updated last year
- 😊 TPTT: Transforming Pretrained Transformers into Titans ☆57 · Updated 2 months ago
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated last year
- imagetokenizer is a Python package that helps you encode visuals and generate visual token IDs from a codebook; supports both image and video… ☆40 · Updated last year
- This project was established for real-time training of the RWKV model. ☆50 · Updated last year
- [ICML 2025] Fourier Position Embedding: Enhancing Attention’s Periodic Extension for Length Generalization ☆108 · Updated 8 months ago
- PyTorch implementation of Titans. ☆31 · Updated last year
- RWKV-7: Surpassing GPT ☆104 · Updated last year
- ROSA-Tuning ☆65 · Updated last week
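As referenced in the linear-attention entry above: the core recurrence such PyTorch/Triton libraries accelerate replaces the quadratic attention matrix with a running outer-product state, S_t = S_{t-1} + φ(k_t)v_tᵀ, read out as o_t = φ(q_t)S_t with a matching normalizer, giving constant memory per decoded token. The sketch below is a plain-PyTorch reference of that recurrence under an assumed ELU+1 feature map; it is illustrative only, since real libraries fuse this loop into chunked Triton kernels.

```python
import torch

def linear_attention(q, k, v):
    """Minimal causal linear attention: maintain a running state
    S_t = S_{t-1} + phi(k_t) v_t^T and read out o_t = phi(q_t) S_t,
    normalized by a running sum of phi(k). Shapes: q, k are (T, d_k);
    v is (T, d_v)."""
    phi = lambda x: torch.nn.functional.elu(x) + 1   # positive feature map
    T, d_k = q.shape
    d_v = v.shape[1]
    S = torch.zeros(d_k, d_v)        # running outer-product state
    z = torch.zeros(d_k)             # running normalizer
    out = torch.empty(T, d_v)
    for t in range(T):
        S = S + torch.outer(phi(k[t]), v[t])
        z = z + phi(k[t])
        out[t] = phi(q[t]) @ S / (phi(q[t]) @ z + 1e-6)
    return out

q, k, v = (torch.randn(16, 8) for _ in range(3))
print(linear_attention(q, k, v).shape)   # torch.Size([16, 8])
```

Because the state S has fixed size (d_k × d_v), decoding cost per step does not grow with context length, which is the property RWKV-style models and the linear-attention kernels in the list above share.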