00ffcc / chunkRWKV6
Continuous batching and parallel acceleration for RWKV6
☆24 · Updated 7 months ago
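Continuous batching means finished sequences leave the running batch at every decode step and queued requests join immediately, so the batch never drains the way it would under static batching. A minimal toy sketch of that scheduling idea follows; the `Request` and `ContinuousBatcher` names are invented for illustration and this is not chunkRWKV6's actual implementation.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Request:
    rid: int
    prompt_len: int
    max_new_tokens: int
    generated: int = 0  # tokens decoded so far


class ContinuousBatcher:
    """Toy continuous-batching scheduler: finished sequences are evicted
    each step and waiting requests fill the freed slots immediately,
    instead of waiting for the whole batch to finish (static batching)."""

    def __init__(self, max_batch: int):
        self.max_batch = max_batch
        self.waiting = deque()
        self.running = []

    def submit(self, req: Request):
        self.waiting.append(req)

    def step(self):
        # Admit waiting requests into any free batch slots.
        while self.waiting and len(self.running) < self.max_batch:
            self.running.append(self.waiting.popleft())
        # One decode step for every running sequence.
        for r in self.running:
            r.generated += 1
        # Evict finished sequences, freeing slots for the next step.
        done = [r for r in self.running if r.generated >= r.max_new_tokens]
        self.running = [r for r in self.running if r.generated < r.max_new_tokens]
        return done
```

For an RNN-style model such as RWKV, each sequence carries a fixed-size recurrent state, so joining or evicting a sequence is just adding or removing one state row, which is what makes this scheduling style a natural fit.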
Alternatives and similar repositories for chunkRWKV6:
Users interested in chunkRWKV6 are comparing it to the libraries listed below.
- ☆30 · Updated 8 months ago
- 🔥 A minimal training framework for scaling FLA models ☆59 · Updated this week
- ☆22 · Updated last year
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆62 · Updated 9 months ago
- Here we will test various linear attention designs. ☆58 · Updated 9 months ago
- SQUEEZED ATTENTION: Accelerating Long Prompt LLM Inference ☆40 · Updated 3 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆64 · Updated 3 months ago
- ☆47 · Updated last year
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆31 · Updated 6 months ago
- An innovative method expediting LLMs via streamlined semi-autoregressive generation and draft verification. ☆24 · Updated last year
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆35 · Updated 8 months ago
- Stick-breaking attention ☆43 · Updated last month
- ☆18 · Updated last week
- ☆49 · Updated 7 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆64 · Updated 8 months ago
- Xmixers: A collection of SOTA efficient token/channel mixers ☆11 · Updated 3 months ago
- A large-scale RWKV v6, v7 (World, ARWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy on docke… ☆30 · Updated this week
- ☆61 · Updated 3 weeks ago
- Linear Attention Sequence Parallelism (LASP) ☆77 · Updated 8 months ago
- Transformers components but in Triton ☆31 · Updated 3 months ago
- ☆111 · Updated last week
- ☆99 · Updated 11 months ago
- Fast and memory-efficient exact attention ☆58 · Updated this week
- Official Implementation Of The Paper: "DeciMamba: Exploring the Length Extrapolation Potential of Mamba" ☆23 · Updated 6 months ago
- Vocabulary Parallelism ☆17 · Updated 3 months ago
- Triton implementation of FlashAttention2 that adds Custom Masks. ☆96 · Updated 6 months ago
- ☆46 · Updated last year
- Evaluating LLMs with Dynamic Data ☆75 · Updated last week
- The official implementation of paper: SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction. ☆42 · Updated 4 months ago
- A 20M RWKV v6 can do nonogram ☆12 · Updated 4 months ago
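A common thread across many of the repositories above (RWKV, HGRN, LASP, the linear-attention testbeds) is that causal linear attention can be rewritten as a recurrence over a fixed-size state, giving O(1) memory per decoded token. A minimal NumPy sketch of that equivalence, not tied to any particular repository's implementation, assuming the simplest unnormalized form:

```python
import numpy as np


def linear_attention_parallel(Q, K, V):
    # Parallel form of causal linear attention (no softmax):
    # o_t = sum_{s <= t} (q_t . k_s) v_s, enforced by a lower-triangular mask.
    T = Q.shape[0]
    mask = np.tril(np.ones((T, T)))
    return (Q @ K.T * mask) @ V


def linear_attention_recurrent(Q, K, V):
    # Same computation as a recurrence over a (d x d) state S:
    # S_t = S_{t-1} + k_t v_t^T,  o_t = q_t S_t.
    # The state size is independent of sequence length, which is what
    # enables constant-memory, RWKV-style autoregressive decoding.
    T, d = Q.shape
    S = np.zeros((d, d))
    out = np.zeros((T, d))
    for t in range(T):
        S += np.outer(K[t], V[t])  # fold token t into the state
        out[t] = Q[t] @ S          # read out with the query
    return out
```

The two forms produce identical outputs; the parallel form is used for training (one big matmul), while the recurrent form is used at inference time.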