JL-er / WorldRWKV
The WorldRWKV project aims to implement training and inference across various modalities using the RWKV7 architecture. By leveraging different encoders, the project allows for flexible modality switching and aspires to achieve end-to-end cross-modal inference.
☆52 · Updated last week
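The pattern the blurb describes (per-modality encoders feeding a shared RWKV-style backbone, so modalities can be swapped without retraining the core) can be sketched roughly as below. This is a minimal illustrative sketch only; `ModalityRouter` and all names in it are hypothetical and are not WorldRWKV's actual API.

```python
# Hypothetical sketch of the encoder-swap pattern: each modality gets its
# own encoder mapping raw input to a shared embedding space, and a single
# backbone consumes the result. Names are illustrative, not WorldRWKV's API.
from typing import Callable, Dict, List


class ModalityRouter:
    """Routes inputs through per-modality encoders into one shared backbone."""

    def __init__(self, backbone: Callable[[List[float]], str]):
        self.backbone = backbone
        self.encoders: Dict[str, Callable[[object], List[float]]] = {}

    def register(self, modality: str, encoder: Callable[[object], List[float]]):
        # Swapping modalities is just registering a different encoder.
        self.encoders[modality] = encoder

    def infer(self, modality: str, raw_input) -> str:
        if modality not in self.encoders:
            raise KeyError(f"no encoder registered for {modality!r}")
        embedding = self.encoders[modality](raw_input)
        return self.backbone(embedding)


# Toy usage: a stand-in "backbone" that just reports embedding length.
router = ModalityRouter(backbone=lambda emb: f"processed {len(emb)} features")
router.register("text", lambda s: [float(ord(c)) for c in s])
router.register("audio", lambda samples: [float(x) for x in samples])

print(router.infer("text", "hi"))        # processed 2 features
print(router.infer("audio", [0, 1, 2]))  # processed 3 features
```

In a real system the encoders would be pretrained modality models (e.g. a vision or speech encoder) projecting into the backbone's embedding dimension; the routing structure stays the same.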
Alternatives and similar repositories for WorldRWKV
Users interested in WorldRWKV are comparing it to the repositories listed below.
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆38 · Updated this week
- ☆134 · Updated 3 weeks ago
- State tuning tunes the state ☆34 · Updated 5 months ago
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆137 · Updated last month
- VisualRWKV is the visual-enhanced version of the RWKV language model, enabling RWKV to handle various visual tasks. ☆231 · Updated last month
- ☆18 · Updated 6 months ago
- Reinforcement Learning Toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite context training, aligning. Exploring the… ☆47 · Updated 2 weeks ago
- ☆37 · Updated 2 months ago
- RWKV-LM-V7 (https://github.com/BlinkDL/RWKV-LM) under the Lightning framework ☆35 · Updated this week
- This project extends RWKV LM's capabilities, including sequence classification/embedding/PEFT/cross encoder/bi encoder/multi modaliti… ☆10 · Updated 11 months ago
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… ☆42 · Updated this week
- ☆34 · Updated 11 months ago
- A large-scale RWKV v6, v7 (World, PRWKV, Hybrid-RWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to de… ☆38 · Updated last week
- imagetokenizer is a python package that helps you encode visuals and generate visual token ids from a codebook, supports both image and video… ☆34 · Updated last year
- This is an inference framework for the RWKV large language model implemented purely in native PyTorch. The official native implementation… ☆129 · Updated 11 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆49 · Updated 3 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 2 months ago
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… ☆20 · Updated 3 weeks ago
- Open-Pandora: On-the-fly Control Video Generation ☆34 · Updated 7 months ago
- Official repository for the ICML 2024 paper "MoRe Fine-Tuning with 10x Fewer Parameters" ☆20 · Updated last month
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆148 · Updated 11 months ago
- ☆23 · Updated 6 months ago
- BlackGoose Rimer: RWKV as a Superior Architecture for Large-Scale Time Series Modeling ☆24 · Updated 3 weeks ago
- A specialized RWKV-7 model for Othello (a.k.a. Reversi) that predicts legal moves, evaluates positions, and performs in-context search. It… ☆41 · Updated 5 months ago
- RWKV finetuning ☆36 · Updated last year
- A collection of tricks and tools to speed up transformer models ☆170 · Updated last month
- RAG system for RWKV ☆50 · Updated 7 months ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆101 · Updated last year
- Here we will test various linear attention designs. ☆60 · Updated last year
- [ICML 2025] Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization ☆75 · Updated last month