Experiments on Multi-Head Latent Attention
☆99 · Aug 19, 2024 · Updated last year
Alternatives and similar repositories for mla-experiments
Users interested in mla-experiments are comparing it to the libraries listed below.
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- PyTorch implementation of the Flash Spectral Transform Unit. ☆21 · Sep 19, 2024 · Updated last year
- Transformers components, but in Triton ☆34 · May 9, 2025 · Updated 9 months ago
- DeeperGEMM: a heavily optimized version ☆74 · May 5, 2025 · Updated 9 months ago
- CuTe layout visualization ☆30 · Jan 18, 2026 · Updated last month
- Triton OpenCL backend; uses mlir-translate to obtain OpenCL source code ☆24 · Aug 27, 2025 · Updated 6 months ago
- ☆52 · May 19, 2025 · Updated 9 months ago
- Framework that reduces autotuning overhead to zero for well-known deployments. ☆97 · Sep 19, 2025 · Updated 5 months ago
- FlashInfer Bench @ MLSys 2026: building AI agents to write high-performance GPU kernels ☆141 · Feb 9, 2026 · Updated 3 weeks ago
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆57 · Nov 20, 2024 · Updated last year
- My tests and experiments with some popular DL frameworks. ☆17 · Sep 11, 2025 · Updated 5 months ago
- An experimental communicating attention kernel based on DeepEP. ☆35 · Jul 29, 2025 · Updated 7 months ago
- All-in-one benchmarking platform for evaluating LLMs. ☆15 · Nov 12, 2025 · Updated 3 months ago
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- Code and data for the paper "(How) Do Language Models Track State?" ☆20 · Mar 31, 2025 · Updated 11 months ago
- A Top-Down Profiler for GPU Applications ☆22 · Feb 29, 2024 · Updated 2 years ago
- An auxiliary project analyzing the characteristics of KV in DiT Attention. ☆33 · Nov 29, 2024 · Updated last year
- [ICLR'25] Code for KaSA, an official implementation of "KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models" ☆20 · Jan 16, 2025 · Updated last year
- A parallelized VAE that avoids OOM in high-resolution image generation ☆85 · Aug 4, 2025 · Updated 7 months ago
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆434 · Updated this week
- TiledKernel is a code-generation library based on macro kernels and a memory-hierarchy graph data structure. ☆19 · May 12, 2024 · Updated last year
- ☆39 · Dec 14, 2025 · Updated 2 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆327 · Updated this week
- PTX-EMU is a simple emulator for CUDA programs. ☆38 · Apr 25, 2025 · Updated 10 months ago
- ☆115 · Aug 26, 2024 · Updated last year
- ☆65 · Apr 26, 2025 · Updated 10 months ago
- Implementation of Flash Attention using CuTe. ☆101 · Dec 17, 2024 · Updated last year
- TileFusion is an experimental C++ macro-kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆106 · Jun 28, 2025 · Updated 8 months ago
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆20 · Jul 19, 2024 · Updated last year
- ☆14 · Apr 14, 2025 · Updated 10 months ago
- ☆12 · Jul 7, 2022 · Updated 3 years ago
- Benchmarking Optimizers for LLM Pretraining ☆52 · Dec 30, 2025 · Updated 2 months ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Jun 6, 2024 · Updated last year
- A curated list of research papers, resources, and advancements on Diffusion Cache and related efficient diffusion model acceleration tech… ☆74 · Nov 4, 2025 · Updated 4 months ago
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆262 · Nov 18, 2024 · Updated last year
- ☆23 · Jul 11, 2025 · Updated 7 months ago
- [ICLR'25] "Understanding Bottlenecks of State Space Models through the Lens of Recency and Over-smoothing" by Peihao Wang, Ruisi Cai, Yue… ☆17 · Mar 21, 2025 · Updated 11 months ago
- ☆13 · Jan 7, 2025 · Updated last year
- ☆13 · Jun 18, 2024 · Updated last year