howard-hou / RWKV-X
RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture; it integrates sparse attention to improve the model's long-sequence processing capabilities.
☆20 · Updated this week
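The description above combines an RWKV-style linear-complexity recurrence with sparse attention for long sequences. As a rough illustration of that general hybrid pattern only (not the actual RWKV-X code), the sketch below interleaves a linear-recurrence token mixer with a sliding-window (sparse) attention layer; all module names, window sizes, and hyperparameters here are hypothetical.

```python
# Hypothetical sketch of a "linear recurrence + sparse attention" hybrid block.
# NOT the RWKV-X implementation; it only illustrates mixing an O(T) recurrent
# token mixer with a local (sparse) attention layer in one residual block.
import torch
import torch.nn as nn


class LinearRecurrentMixer(nn.Module):
    """O(T) token mixer: a gated exponential moving average over the sequence."""
    def __init__(self, dim: int):
        super().__init__()
        self.decay = nn.Parameter(torch.zeros(dim))  # per-channel decay logits
        self.in_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, D)
        v = self.in_proj(x)
        a = torch.sigmoid(self.decay)                     # decay in (0, 1)
        state = torch.zeros_like(v[:, 0])
        outs = []
        for t in range(v.size(1)):                        # recurrent scan, O(T)
            state = a * state + (1 - a) * v[:, t]
            outs.append(state)
        return self.out_proj(torch.stack(outs, dim=1))


class LocalSparseAttention(nn.Module):
    """Sparse attention: each token attends only to a fixed-size causal window."""
    def __init__(self, dim: int, window: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.window = window

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, D)
        T = x.size(1)
        idx = torch.arange(T, device=x.device)
        # Boolean band mask: True marks positions a query may NOT attend to.
        mask = (idx[None, :] > idx[:, None]) | (idx[:, None] - idx[None, :] >= self.window)
        out, _ = self.attn(x, x, x, attn_mask=mask)
        return out


class HybridBlock(nn.Module):
    """One hybrid layer: linear recurrence, then sparse attention, then an MLP."""
    def __init__(self, dim: int):
        super().__init__()
        self.norm1, self.norm2, self.norm3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.rnn = LinearRecurrentMixer(dim)
        self.sparse_attn = LocalSparseAttention(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.rnn(self.norm1(x))
        x = x + self.sparse_attn(self.norm2(x))
        return x + self.mlp(self.norm3(x))


if __name__ == "__main__":
    block = HybridBlock(dim=128)
    y = block(torch.randn(2, 256, 128))
    print(y.shape)  # torch.Size([2, 256, 128])
```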
Alternatives and similar repositories for RWKV-X:
Users interested in RWKV-X are comparing it to the libraries listed below.
- ☆17 · Updated this week
- Expanding linear RNN state-transition matrix eigenvalues to include negatives improves state-tracking tasks and language modeling without… ☆15 · Updated last month
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Continuous batching and parallel acceleration for RWKV6 ☆24 · Updated 10 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated 9 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆27 · Updated this week
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆28 · Updated this week
- Code for the paper "Cottention: Linear Transformers With Cosine Attention" ☆17 · Updated 6 months ago
- ☆53 · Updated 9 months ago
- Official implementation of "The Sparse Frontier: Sparse Attention Trade-offs in Transformer LLMs" ☆22 · Updated last week
- A large-scale RWKV v6, v7 (World, ARWKV, PRWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy o… ☆35 · Updated this week
- ☆32 · Updated last year
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆32 · Updated 8 months ago
- ☆34 · Updated this week
- Here we will test various linear attention designs. ☆60 · Updated last year
- ☆47 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆11 · Updated 5 months ago
- Combining SOAP and MUON ☆16 · Updated 2 months ago
- This is a simple torch implementation of the high performance Multi-Query Attention ☆16 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆47 · Updated last year
- ☆30 · Updated 11 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated 8 months ago
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆97 · Updated 3 weeks ago
- Official code for the paper "Attention as a Hypernetwork" ☆30 · Updated 10 months ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆64 · Updated last year
- ☆31 · Updated last year
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆35 · Updated 2 months ago
- Scaling Sparse Fine-Tuning to Large Language Models ☆15 · Updated last year
- Awesome Triton Resources ☆27 · Updated last week
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated 10 months ago