☆33 · May 26, 2024 · Updated last year
Alternatives and similar repositories for flash-rwkv
Users interested in flash-rwkv are comparing it to the libraries listed below.
- continuous batching and parallel acceleration for RWKV6 ☆22 · Jun 28, 2024 · Updated last year
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… ☆56 · Mar 31, 2026 · Updated 2 weeks ago
- Here we will test various linear attention designs. ☆62 · Apr 25, 2024 · Updated last year
- RADLADS training code ☆39 · May 7, 2025 · Updated 11 months ago
- ☆27 · Feb 26, 2026 · Updated last month
- GoldFinch and other hybrid transformer components ☆46 · Jul 20, 2024 · Updated last year
- ☆125 · Dec 15, 2023 · Updated 2 years ago
- Julia implementation of the flash-attention operation for neural networks. ☆11 · May 31, 2023 · Updated 2 years ago
- ☆17 · Nov 17, 2023 · Updated 2 years ago
- RWKV centralised docs for the community ☆32 · Jan 17, 2026 · Updated 2 months ago
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … ☆11 · Mar 18, 2023 · Updated 3 years ago
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆34 · Aug 14, 2024 · Updated last year
- ☆51 · Jan 28, 2024 · Updated 2 years ago
- FP8 flash attention implemented with the CUTLASS library on the Ada architecture ☆82 · Aug 12, 2024 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆17 · Jun 3, 2024 · Updated last year
- ☆19 · Dec 24, 2024 · Updated last year
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆106 · Jun 28, 2025 · Updated 9 months ago
- Mini Model Daemon ☆13 · Nov 9, 2024 · Updated last year
- ☆13 · Mar 27, 2023 · Updated 3 years ago
- ☆176 · Jan 13, 2026 · Updated 3 months ago
- play gemm with tvm ☆91 · Jul 22, 2023 · Updated 2 years ago
- ☆34 · Jul 21, 2024 · Updated last year
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Jun 6, 2024 · Updated last year
- Fast modular code to create and train cutting-edge LLMs ☆68 · May 16, 2024 · Updated last year
- Bag of MLP ☆20 · May 31, 2021 · Updated 4 years ago
- Train, tune, and infer Bamba model ☆138 · Jun 4, 2025 · Updated 10 months ago
- Using FlexAttention to compute attention with different masking patterns ☆47 · Sep 22, 2024 · Updated last year
- ☆11 · Oct 11, 2023 · Updated 2 years ago
- Optimized inference with Ascend and Hugging Face ☆12 · Apr 23, 2024 · Updated last year
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- The WorldRWKV project aims to implement training and inference across various modalities using the RWKV7 architecture. By leveraging diff… ☆66 · Mar 18, 2026 · Updated 3 weeks ago
- RWKV6 in native PyTorch and Triton :) ☆11 · Aug 4, 2024 · Updated last year
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference engine. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy… ☆48 · Oct 21, 2025 · Updated 5 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Jun 17, 2024 · Updated last year
- A fast RWKV Tokenizer written in Rust ☆54 · Aug 12, 2025 · Updated 8 months ago
- Direct Preference Optimization for RWKV, aiming for RWKV-5 and 6. ☆11 · Mar 1, 2024 · Updated 2 years ago
- A repository for research on medium-sized language models. ☆78 · May 23, 2024 · Updated last year
- A 20M RWKV v6 can do nonogram ☆14 · Oct 18, 2024 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA and MLA using CUDA cores for the decoding stage of LLM inference. ☆46 · Jun 11, 2025 · Updated 10 months ago