BlinkDL / RWKV-CUDA
The CUDA version of the RWKV language model (https://github.com/BlinkDL/RWKV-LM)
☆230 · Dec 10, 2025 · Updated 2 months ago
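For context on what the repository's CUDA kernels compute: RWKV's core time-mixing op is the WKV recurrence, an exponentially decayed weighted average over past tokens. Below is a minimal serial sketch in Python/NumPy, assuming the un-stabilized RWKV-v4-style formulation (the real kernels add a max-shift for numerical stability and run the loop on the GPU); the function name `wkv_serial` is mine, not from the repo.

```python
import numpy as np

def wkv_serial(w, u, k, v):
    """Naive serial WKV recurrence (RWKV-v4 style), one channel.

    w: per-step time decay (> 0), u: bonus for the current token,
    k, v: per-token key and value sequences of equal length.
    Omits the max-shift stabilization used in production kernels.
    """
    T = len(k)
    out = np.empty(T)
    p = 0.0  # running numerator:   sum_i exp(-(t-1-i)*w + k_i) * v_i
    q = 0.0  # running denominator: sum_i exp(-(t-1-i)*w + k_i)
    for t in range(T):
        e = np.exp(u + k[t])            # current token gets the bonus u
        out[t] = (p + e * v[t]) / (q + e)
        decay = np.exp(-w)              # decay older contributions each step
        p = decay * p + np.exp(k[t]) * v[t]
        q = decay * q + np.exp(k[t])
    return out
```

With `w = u = 0` and all keys zero, the recurrence degenerates to a running mean of the values, e.g. `wkv_serial(0.0, 0.0, np.zeros(3), np.array([1.0, 2.0, 3.0]))` yields `[1.0, 1.5, 2.0]`; the sequential dependence of `p` and `q` on the previous step is exactly what the CUDA kernels parallelize across channels and batch.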
Alternatives and similar repositories for RWKV-CUDA
Users interested in RWKV-CUDA are comparing it to the repositories listed below.
- ☆125 · Dec 15, 2023 · Updated 2 years ago
- Study of CUTLASS ☆22 · Nov 10, 2024 · Updated last year
- A torchless C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… ☆313 · Jan 31, 2024 · Updated 2 years ago
- GoldFinch and other hybrid transformer components ☆12 · Dec 9, 2025 · Updated 2 months ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆477 · Mar 15, 2024 · Updated last year
- HunyuanDiT with TensorRT and libtorch ☆18 · May 22, 2024 · Updated last year
- Let us make Psychohistory (as in Asimov) a reality, and accessible to everyone. Useful for LLM grounding and games / fiction / business /… ☆40 · Apr 9, 2023 · Updated 2 years ago
- ☆27 · Jul 28, 2025 · Updated 6 months ago
- Row-major matmul optimization ☆701 · Aug 20, 2025 · Updated 5 months ago
- Continuous batching and parallel acceleration for RWKV6 ☆22 · Jun 28, 2024 · Updated last year
- INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model ☆1,562 · Mar 23, 2025 · Updated 10 months ago
- ☆171 · Jan 13, 2026 · Updated last month
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention for Test-Time Regressi… ☆23 · Oct 1, 2025 · Updated 4 months ago
- A tool to convert a TensorRT engine/plan to a fake ONNX ☆41 · Nov 22, 2022 · Updated 3 years ago
- FlexAttention with FlashAttention-3 support ☆27 · Oct 5, 2024 · Updated last year
- RWKV in nanoGPT style ☆197 · Jun 9, 2024 · Updated last year
- A standalone GEMM kernel for FP16 activations and quantized weights, extracted from FasterTransformer ☆96 · Sep 13, 2025 · Updated 5 months ago
- Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot Classification ☆11 · Aug 12, 2023 · Updated 2 years ago
- ☆12 · Dec 14, 2024 · Updated last year
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,351 · Updated this week
- ☆32 · May 26, 2024 · Updated last year
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆61 · Feb 21, 2022 · Updated 3 years ago
- JAX implementations of RWKV ☆19 · Sep 26, 2023 · Updated 2 years ago
- ☆11 · Jul 23, 2023 · Updated 2 years ago
- RWKV-v2-RNN trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. ☆67 · Sep 14, 2022 · Updated 3 years ago
- Trying to deconstruct RWKV in understandable terms ☆14 · May 6, 2023 · Updated 2 years ago
- Triton implementation of bi-directional (non-causal) linear attention ☆65 · Feb 2, 2026 · Updated last week
- DeepStream + CUDA: YOLO26, YOLO-Master, YOLO11, YOLOv8, SAM, Transformer, etc. ☆35 · Feb 7, 2026 · Updated last week
- RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture, integrating sparse attention to improve the model's l… ☆54 · Jan 12, 2026 · Updated last month
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Jun 6, 2024 · Updated last year
- Playing with GEMM kernels in TVM ☆91 · Jul 22, 2023 · Updated 2 years ago
- ☆44 · Mar 29, 2023 · Updated 2 years ago
- This project demonstrates the computation process of the RWKV (Receptance Weighted Key Value) model through Excel spreadsheets. ☆18 · Jun 7, 2025 · Updated 8 months ago
- A nanoGPT-style implementation of the RWKV language model, an RNN with GPT-level LLM performance. ☆198 · Nov 9, 2023 · Updated 2 years ago
- ☆65 · Apr 26, 2025 · Updated 9 months ago
- LLaMA/RWKV ONNX models, quantization, and test cases ☆366 · Jul 6, 2023 · Updated 2 years ago
- ☆29 · Oct 3, 2022 · Updated 3 years ago
- A simple, high-performance CUDA GEMM implementation ☆426 · Jan 4, 2024 · Updated 2 years ago
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆114 · Sep 10, 2024 · Updated last year