Joluck / WorldRWKV
The WorldRWKV project aims to implement training and inference across various modalities using the RWKV7 architecture. By leveraging different encoders, the project allows for flexible modality switching and aspires to achieve end-to-end cross-modal inference.
☆58 · Updated this week
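The encoder-swapping idea described above can be sketched as follows. This is a minimal illustrative sketch, not WorldRWKV's actual API: all class and method names are hypothetical, and the backbone is a toy recurrence standing in for the RWKV7 core.

```python
# Sketch of the modality-switching pattern: each modality gets its own encoder
# mapping raw input to a shared embedding space, and a single sequence backbone
# (a toy stand-in for RWKV7) consumes the result. Names are illustrative only.

class TextEncoder:
    def encode(self, text):
        # toy featurizer: one scalar "embedding" per character
        return [ord(c) / 255.0 for c in text]

class AudioEncoder:
    def encode(self, samples):
        # toy featurizer: mean pooling over fixed-size windows
        window = 4
        return [sum(samples[i:i + window]) / window
                for i in range(0, len(samples) - window + 1, window)]

class SequenceBackbone:
    """Stand-in for the RWKV7 core: a simple decaying recurrent scan."""
    def __init__(self, decay=0.9):
        self.decay = decay

    def forward(self, embeddings):
        state = 0.0
        for e in embeddings:
            state = self.decay * state + (1.0 - self.decay) * e
        return state

def run(encoder, backbone, raw_input):
    # Swapping `encoder` switches the modality; the backbone is untouched.
    return backbone.forward(encoder.encode(raw_input))

backbone = SequenceBackbone()
print(run(TextEncoder(), backbone, "rwkv"))
print(run(AudioEncoder(), backbone, [0.1] * 16))
```

The design choice this illustrates is that only the encoder is modality-specific; the shared backbone sees a uniform embedding sequence, which is what makes flexible modality switching possible.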
Alternatives and similar repositories for WorldRWKV
Users interested in WorldRWKV are comparing it to the libraries listed below.
- RWKV-LM-V7 (https://github.com/BlinkDL/RWKV-LM) under the Lightning framework ☆45 · Updated 2 months ago
- ☆148 · Updated last month
- ☆17 · Updated 9 months ago
- ☆38 · Updated 5 months ago
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆45 · Updated last month
- State tuning tunes the state ☆35 · Updated 8 months ago
- RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture, integrating sparse attention to improve the model's l… ☆50 · Updated 2 months ago
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆142 · Updated 4 months ago
- Reinforcement Learning Toolkit for RWKV (v6, v7, ARWKV). Distillation, SFT, RLHF (DPO, ORPO), infinite context training, aligning. Exploring the… ☆54 · Updated 3 weeks ago
- VisualRWKV is the visual-enhanced version of the RWKV language model, enabling RWKV to handle various visual tasks. ☆234 · Updated 4 months ago
- This project extends RWKV LM's capabilities, including sequence classification/embedding/PEFT/cross encoder/bi encoder/multi modaliti… ☆10 · Updated last year
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference engine. Capable of inference by combining multiple states (pseudo-MoE). Easy to deploy… ☆44 · Updated last month
- A specialized RWKV-7 model for Othello (a.k.a. Reversi) that predicts legal moves, evaluates positions, and performs in-context search. It… ☆42 · Updated 8 months ago
- ☆34 · Updated last year
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… ☆24 · Updated last month
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 5 months ago
- 😊 TPTT: Transforming Pretrained Transformers into Titans ☆29 · Updated 2 weeks ago
- [ICML 2025] Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization ☆99 · Updated 4 months ago
- imagetokenizer is a Python package that helps you encode visuals and generate visual token IDs from a codebook; supports both image and video… ☆37 · Updated last year
- ☆22 · Updated 9 months ago
- RWKV-7: Surpassing GPT ☆97 · Updated 10 months ago
- A collection of tricks and tools to speed up transformer models ☆182 · Updated last week
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆53 · Updated 6 months ago
- This is an inference framework for the RWKV large language model implemented purely in native PyTorch. The official native implementation… ☆130 · Updated last year
- RWKV infctx trainer, for training arbitrary context sizes, to 10k and beyond! ☆147 · Updated last year
- PyTorch implementation of https://arxiv.org/html/2404.07143v1 ☆21 · Updated last year
- Open-Pandora: On-the-fly Control Video Generation ☆34 · Updated 10 months ago
- Code for the paper "Patch-Level Training for Large Language Models" ☆88 · Updated 11 months ago
- BlackGoose Rimer: RWKV as a Superior Architecture for Large-Scale Time Series Modeling ☆28 · Updated 3 months ago
- A fast RWKV Tokenizer written in Rust ☆53 · Updated 2 months ago