The CUDA version of the RWKV language model ( https://github.com/BlinkDL/RWKV-LM )
☆230 · Dec 10, 2025 · Updated 2 months ago
Alternatives and similar repositories for RWKV-CUDA
Users interested in RWKV-CUDA are comparing it to the repositories listed below.
- ☆125 · Dec 15, 2023 · Updated 2 years ago
- study of cutlass ☆22 · Nov 10, 2024 · Updated last year
- A torchless, C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… ☆312 · Jan 31, 2024 · Updated 2 years ago
- GoldFinch and other hybrid transformer components ☆12 · Dec 9, 2025 · Updated 2 months ago
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆478 · Mar 15, 2024 · Updated last year
- HunyuanDiT with TensorRT and libtorch ☆18 · May 22, 2024 · Updated last year
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Apr 9, 2023 · Updated 2 years ago
- ☆27 · Feb 26, 2026 · Updated last week
- row-major matmul optimization ☆707 · Feb 24, 2026 · Updated last week
- continuous batching and parallel acceleration for RWKV6 ☆22 · Jun 28, 2024 · Updated last year
- INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model ☆1,564 · Mar 23, 2025 · Updated 11 months ago
- ☆176 · Jan 13, 2026 · Updated last month
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… ☆23 · Oct 1, 2025 · Updated 5 months ago
- A tool to convert a TensorRT engine/plan to a fake ONNX ☆41 · Nov 22, 2022 · Updated 3 years ago
- FlexAttention w/ FlashAttention3 Support ☆27 · Oct 5, 2024 · Updated last year
- RWKV in nanoGPT style ☆196 · Jun 9, 2024 · Updated last year
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆96 · Feb 20, 2026 · Updated 2 weeks ago
- ☆12 · Dec 14, 2024 · Updated last year
- Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot Classification ☆11 · Aug 12, 2023 · Updated 2 years ago
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,393 · Feb 21, 2026 · Updated 2 weeks ago
- ☆32 · May 26, 2024 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆61 · Feb 21, 2022 · Updated 4 years ago
- JAX implementations of RWKV ☆19 · Sep 26, 2023 · Updated 2 years ago
- DALLE-tools provides useful dataset utilities to improve your workflow with WebDatasets. ☆14 · Mar 9, 2022 · Updated 3 years ago
- ☆11 · Jul 23, 2023 · Updated 2 years ago
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆67 · Sep 14, 2022 · Updated 3 years ago
- Trying to deconstruct RWKV in understandable terms ☆14 · May 6, 2023 · Updated 2 years ago
- RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture, integrating sparse attention to improve the model's l… ☆54 · Jan 12, 2026 · Updated last month
- Triton implementation of bi-directional (non-causal) linear attention ☆70 · Feb 22, 2026 · Updated last week
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Jun 6, 2024 · Updated last year
- play gemm with tvm ☆92 · Jul 22, 2023 · Updated 2 years ago
- ☆44 · Mar 29, 2023 · Updated 2 years ago
- This project demonstrates the computation process of the RWKV (Receptance Weighted Key Value) model through Excel spreadsheets. ☆19 · Jun 7, 2025 · Updated 9 months ago
- The nanoGPT-style implementation of the RWKV Language Model - an RNN with GPT-level LLM performance. ☆198 · Nov 9, 2023 · Updated 2 years ago
- ☆65 · Apr 26, 2025 · Updated 10 months ago
- LLaMa/RWKV ONNX models, quantization and test cases ☆366 · Jul 6, 2023 · Updated 2 years ago
- ☆29 · Oct 3, 2022 · Updated 3 years ago
- A simple high-performance CUDA GEMM implementation. ☆426 · Jan 4, 2024 · Updated 2 years ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆114 · Sep 10, 2024 · Updated last year