☆97, updated Feb 11, 2026
Alternatives and similar repositories for infllmv2_cuda_impl
Users interested in infllmv2_cuda_impl are comparing it to the libraries listed below.
- ☆49, updated Dec 13, 2025
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring (☆273, updated Jul 6, 2025)
- Ongoing research project for code & math LLMs (☆27, updated Jul 4, 2025)
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… (☆236, updated Jan 14, 2026)
- Distributed IO-aware attention algorithm (☆24, updated Sep 24, 2025)
- Efficient Triton implementation of Native Sparse Attention (☆270, updated May 23, 2025)
- Official implementation of APB (ACL 2025 main, oral) and Spava (☆35, updated Jan 30, 2026)
- High-performance FP8 GEMM kernels for SM89 and later GPUs (☆20, updated Jan 24, 2025)
- Sequence-level 1F1B schedule for LLMs (☆19, updated Jun 4, 2024)
- qwen-nsa (☆87, updated Oct 14, 2025)
- ☆38, updated Aug 7, 2025
- A Triton JIT runtime and FFI provider in C++ (☆32, updated this week)
- Code for [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference (☆164, updated Oct 13, 2025)
- Persistent dense GEMM for Hopper in `CuTeDSL` (☆15, updated Aug 9, 2025)
- ☆65, updated Apr 26, 2025
- ☆52, updated May 19, 2025
- DLBlas: clean and efficient kernels (☆35, updated this week)
- ☆18, updated Jun 3, 2024
- Know2BIO: A Comprehensive Dual-View Benchmark for Evolving Biomedical Knowledge Graphs (☆14, updated Feb 10, 2026)
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models (☆341, updated Feb 23, 2025)
- Heuristic filtering framework for RefineCode (☆83, updated Mar 13, 2025)
- Triton adapter for Ascend; mirror of https://gitcode.com/ascend/triton-ascend (☆115, updated this week)
- ☆234, updated Nov 19, 2025
- ☆11, updated Aug 4, 2024
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference (☆286, updated May 1, 2025)
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs (☆62, updated Mar 25, 2025)
- ☆21, updated Jun 5, 2025
- ☆33, updated Feb 3, 2025
- ☆119, updated May 19, 2025
- From MHA, MQA, and GQA to MLA, by Su Jianlin (苏剑林), with code (☆45, updated Feb 19, 2025)
- DICE: Detecting In-distribution Data Contamination with LLM's Internal State (☆11, updated Sep 21, 2024)
- A Survey of Efficient Attention Methods: Hardware-Efficient, Sparse, Compact, and Linear Attention (☆287, updated Dec 1, 2025)
- Some mini-tools for Linux, written in Python (☆13, updated Jun 20, 2017)
- A lightweight inference engine built for block diffusion models (☆42, updated Dec 9, 2025)
- THUIR website (☆10, updated Feb 23, 2026)
- KACC: A Multi-task Benchmark for Knowledge Abstraction, Concretization and Completion (☆12, updated Oct 21, 2021)
- VideoNSA: Native Sparse Attention Scales Video Understanding (☆82, updated Nov 16, 2025)
- 📚 LaTeX templates and tools for creating beautiful, structured documents 📝 (☆14, updated Oct 24, 2025)
- Zero Bubble Pipeline Parallelism (☆451, updated May 7, 2025)