00ffcc / chunkRWKV6
Continuous batching and parallel acceleration for RWKV6
☆24 · Updated 9 months ago
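The repository's description mentions continuous batching, i.e. admitting new requests into free batch slots as running sequences finish rather than waiting for the whole batch to drain. The sketch below illustrates that scheduling pattern for a recurrent model in the RWKV style; all names (`DummyRecurrentModel`, `continuous_batching`) are illustrative stand-ins, not chunkRWKV6's actual API.

```python
# Minimal sketch of continuous batching for a recurrent model such as RWKV6.
# Hypothetical API; chunkRWKV6's real interfaces and state layout may differ.
from collections import deque

class DummyRecurrentModel:
    """Stand-in for an RWKV-style model: one token in, one token out, per-sequence state."""
    def init_state(self):
        return 0  # a real RWKV6 state is a set of tensors per layer

    def step(self, token, state):
        new_state = state + 1
        next_token = (token + new_state) % 100  # placeholder for "logits -> sampled token"
        return next_token, new_state

def continuous_batching(model, requests, max_batch=4, eos=0, max_new_tokens=8):
    """Admit waiting requests as running ones finish, keeping batch slots busy."""
    pending = deque(requests)  # (request_id, prompt_token) pairs
    active = {}                # slot -> (request_id, token, state, n_generated)
    outputs = {}
    while pending or active:
        # Fill any free slots with waiting requests (the "continuous" part).
        while pending and len(active) < max_batch:
            rid, tok = pending.popleft()
            slot = next(i for i in range(max_batch) if i not in active)
            active[slot] = (rid, tok, model.init_state(), 0)
            outputs[rid] = []
        # One decode step per active sequence; a real implementation would
        # fuse these into a single batched tensor call over all slots.
        for slot, (rid, tok, state, n) in list(active.items()):
            tok, state = model.step(tok, state)
            outputs[rid].append(tok)
            if tok == eos or n + 1 >= max_new_tokens:
                del active[slot]  # finished: free the slot immediately
            else:
                active[slot] = (rid, tok, state, n + 1)
    return outputs

print(continuous_batching(DummyRecurrentModel(), [(i, i + 1) for i in range(6)]))
```

With six requests and four slots, the fifth and sixth requests start as soon as earlier sequences hit their stop condition, rather than after the entire first batch completes.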
Alternatives and similar repositories for chunkRWKV6:
Users interested in chunkRWKV6 are comparing it to the libraries listed below.
- ☆30 · Updated 10 months ago
- 🔥 A minimal training framework for scaling FLA models ☆82 · Updated this week
- ☆19 · Updated 2 weeks ago
- ☆71 · Updated last week
- ☆52 · Updated 8 months ago
- Large-scale RWKV v6 and v7 (World, ARWKV, PRWKV) inference. Capable of inference by combining multiple states (pseudo-MoE). Easy to deploy o… ☆33 · Updated last week
- Here we will test various linear attention designs. ☆60 · Updated 11 months ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆64 · Updated 11 months ago
- ☆36 · Updated last week
- ☆70 · Updated 2 weeks ago
- Odysseus: Playground of LLM Sequence Parallelism ☆66 · Updated 9 months ago
- ☆102 · Updated last year
- ☆22 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆77 · Updated 4 months ago
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆62 · Updated last week
- ☆18 · Updated last week
- A 20M-parameter RWKV v6 can solve nonograms ☆13 · Updated 5 months ago
- Code for the paper "Patch-Level Training for Large Language Models" ☆81 · Updated 4 months ago
- My implementation of "Q-Sparse: All Large Language Models can be Fully Sparsely-Activated" ☆31 · Updated 7 months ago
- Linear Attention Sequence Parallelism (LASP) ☆79 · Updated 9 months ago
- XAttention: Block Sparse Attention with Antidiagonal Scoring ☆102 · Updated this week
- ☆65 · Updated 2 months ago
- Triton implementation of FlashAttention2 that adds custom masks ☆103 · Updated 7 months ago
- ☆47 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆11 · Updated 4 months ago
- qwen-nsa ☆42 · Updated last week
- Stick-breaking attention ☆49 · Updated 2 weeks ago
- Triton implementation of bi-directional (non-causal) linear attention ☆44 · Updated last month
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ☆65 · Updated 3 months ago
- An innovative method expediting LLMs via streamlined semi-autoregressive generation and draft verification ☆24 · Updated last year