BBuf / flash-rwkv (☆32, updated May 26, 2024)
Alternatives and similar repositories for flash-rwkv
Users interested in flash-rwkv are comparing it to the repositories listed below.
- ☆27, updated Jul 28, 2025
- RADLADS training code (☆36, updated May 7, 2025)
- Here we will test various linear attention designs. (☆62, updated Apr 25, 2024)
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… (☆54, updated Jan 12, 2026)
- ☆17, updated Nov 17, 2023
- ☆20, updated Dec 24, 2024
- ☆125, updated Dec 15, 2023
- Display tensors directly from GPU (☆11, updated Oct 12, 2025)
- Julia implementation of the flash-attention operation for neural networks. (☆11, updated May 31, 2023)
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … (☆11, updated Mar 18, 2023)
- Optimized inference with Ascend and Hugging Face (☆12, updated Apr 23, 2024)
- ☆11, updated Oct 11, 2023
- Experiments on the impact of depth in transformers and SSMs. (☆40, updated Oct 23, 2025)
- GoldFinch and other hybrid transformer components (☆45, updated Jul 20, 2024)
- ☆51, updated Jan 28, 2024
- FP8 flash attention for the Ada architecture, implemented with the CUTLASS library (☆78, updated Aug 12, 2024)
- RWKV6 in native PyTorch and Triton :) (☆11, updated Aug 4, 2024)
- My implementation of Q-Sparse: All Large Language Models Can Be Fully Sparsely-Activated (☆33, updated Aug 14, 2024)
- API to load and query documents using RAG (☆14, updated Sep 25, 2023)
- A Docker image with LlamaIndex, LangChain, and a few other popular AI packages installed by default (☆12, updated Nov 19, 2025)
- [EMNLP 2023] Official implementation of ETSC (Exact Toeplitz-to-SSM Conversion) from our EMNLP 2023 paper, Accelerating Toeplitz… (☆14, updated Oct 17, 2023)
- Awesome Triton Resources (☆39, updated Apr 27, 2025)
- ☆34, updated Jul 21, 2024
- ☆13, updated Mar 27, 2023
- Bag of MLP (☆20, updated May 31, 2021)
- ☆16, updated Dec 19, 2024
- train with kittens! (☆63, updated Oct 25, 2024)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆17, updated Jun 3, 2024)
- ☆14, updated Apr 19, 2024
- [NeurIPS 2023] CircuitFormer: Circuit as Set of Points (☆38, updated Nov 22, 2023)
- ☆171, updated Jan 13, 2026
- Using FlexAttention to compute attention with different masking patterns (☆47, updated Sep 22, 2024)
- Fast, modular code to create and train cutting-edge LLMs (☆68, updated May 16, 2024)
- ☆45, updated Nov 10, 2023
- ☆20, updated May 30, 2024
- Fast low-bit matmul kernels in Triton (☆429, updated Feb 1, 2026)
- Some preliminary explorations of Mamba's context scaling. (☆218, updated Feb 8, 2024)
- Train, tune, and infer the Bamba model (☆137, updated Jun 4, 2025)
- ☆40, updated Jan 5, 2024