zhiyuan1i / TorchRWKV
RWKV6 in native PyTorch and Triton :)
☆11 · Updated last year
Alternatives and similar repositories for TorchRWKV
Users interested in TorchRWKV are comparing it to the libraries listed below.
- Implementation of a Light Recurrent Unit in PyTorch ☆49 · Updated last year
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆41 · Updated 3 weeks ago
- Implementation of 2-simplicial attention proposed by Clift et al. (2019) and the recent attempt to make it practical in Fast and Simplex, Ro… ☆46 · Updated 4 months ago
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy… ☆46 · Updated 2 months ago
- Here we will test various linear attention designs. ☆62 · Updated last year
- My implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆33 · Updated last year
- ☆32 · Updated 2 years ago
- ☆27 · Updated 5 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 8 months ago
- Official code repository for the paper "Key-value memory in the brain" ☆31 · Updated 10 months ago
- ☆32 · Updated last year
- ☆24 · Updated last year
- Griffin MQA + Hawk Linear RNN Hybrid ☆89 · Updated last year
- Implementation of Strassen attention, from Kozachinskiy et al. of the National Center of AI in Chile ☆41 · Updated 6 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- Official repository for Efficient Linear-Time Attention Transformers ☆18 · Updated last year
- Course project for COMP4471 on RWKV ☆17 · Updated last year
- RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture, integrating sparse attention to improve the model's l… ☆53 · Updated 5 months ago
- Some preliminary explorations of Mamba's context scaling. ☆13 · Updated last year
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Updated 2 years ago
- ☆29 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆56 · Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers ☆49 · Updated 2 years ago
- RWKV, in easy-to-read code ☆72 · Updated 9 months ago
- RWKV-7 mini ☆11 · Updated 9 months ago
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆46 · Updated 4 months ago
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆57 · Updated last year
- State tuning tunes the state ☆35 · Updated 10 months ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated last year
- RWKV-7: Surpassing GPT ☆103 · Updated last year