sandyresearch / chipmunk
🎬 3.7× faster video generation E2E 🖼️ 1.6× faster image generation E2E ⚡ ColumnSparseAttn 9.3× vs FlashAttn‑3 💨 ColumnSparseGEMM 2.5× vs cuBLAS
☆58 · Updated last week
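For context on the headline kernels: column-sparse attention restricts each query to a subset of key/value columns, which is where the speedup over dense attention comes from. Below is a minimal PyTorch sketch of that computation — an illustration of the idea only, not chipmunk's fused CUDA kernel; the `column_sparse_attention` helper and its `col_idx` layout are hypothetical.

```python
import torch
import torch.nn.functional as F

def column_sparse_attention(q, k, v, col_idx):
    # q: (B, H, Lq, D); k, v: (B, H, Lk, D)
    # col_idx: 1-D long tensor of kept key/value positions (hypothetical layout)
    k_s = k[:, :, col_idx, :]   # gather only the kept KV columns
    v_s = v[:, :, col_idx, :]
    scores = (q @ k_s.transpose(-2, -1)) / (q.size(-1) ** 0.5)
    return F.softmax(scores, dim=-1) @ v_s  # out: (B, H, Lq, D)

# Toy usage: keep 1/8 of the 1024 KV columns.
B, H, L, D = 1, 8, 1024, 64
q, k, v = (torch.randn(B, H, L, D) for _ in range(3))
keep = torch.randperm(L)[: L // 8]
out = column_sparse_attention(q, k, v, keep)
```

A fused kernel gets its wins by never materializing the dense score matrix and by coalescing the gathered columns in shared memory; the sketch above only shows the math being approximated.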
Alternatives and similar repositories for chipmunk
Users interested in chipmunk are comparing it to the libraries listed below.
- Patch convolution to avoid the large GPU memory usage of Conv2D ☆87 · Updated 3 months ago
- An auxiliary project analyzing the characteristics of KV in DiT attention. ☆29 · Updated 5 months ago
- (WIP) Parallel inference for black-forest-labs' FLUX model. ☆19 · Updated 6 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆124 · Updated this week
- A parallelized VAE that avoids OOM in high-resolution image generation ☆61 · Updated 3 months ago
- ☆68 · Updated 4 months ago
- XAttention: Block Sparse Attention with Antidiagonal Scoring ☆147 · Updated this week
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆45 · Updated 9 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆41 · Updated 2 weeks ago
- ☆70 · Updated 3 months ago
- Quantized Attention on GPU ☆45 · Updated 5 months ago
- A WebUI for Side-by-Side Comparison of Media (Images/Videos) Across Multiple Folders ☆23 · Updated 2 months ago
- 16-fold memory-access reduction with nearly no loss ☆94 · Updated last month
- ☆129 · Updated 3 months ago
- Code for the paper: [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆103 · Updated this week
- DeeperGEMM: a heavily optimized version ☆69 · Updated last week
- ☆58 · Updated 3 weeks ago
- ☆70 · Updated last week
- [WIP] Better (FP8) attention for Hopper ☆30 · Updated 2 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆103 · Updated last week
- ☆165 · Updated 4 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆69 · Updated 11 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆70 · Updated this week
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆93 · Updated last month
- [ECCV24] MixDQ: Memory-Efficient Few-Step Text-to-Image Diffusion Models with Metric-Decoupled Mixed Precision Quantization ☆40 · Updated 5 months ago
- Code for data-aware compression of DeepSeek models ☆24 · Updated last month
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆116 · Updated 5 months ago
- ☆47 · Updated 9 months ago
- A sparse attention kernel supporting mixed sparse patterns ☆205 · Updated 3 months ago
- ☆49 · Updated last year