Fast and memory-efficient exact kmeans
☆541 · Updated Apr 17, 2026
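This page does not show flash-kmeans' actual API, but as a rough illustration of what "memory-efficient exact k-means" typically means, here is a minimal PyTorch sketch. Everything in it (the `kmeans_step` name, the `chunk_size` parameter, the chunked-distance trick) is an assumption for illustration, not the library's interface: assignments are exact, but distances are computed one chunk of points at a time so the full N×K distance matrix is never materialized at once.

```python
# Hypothetical sketch, NOT flash-kmeans' API: one Lloyd's iteration of exact
# k-means with chunked distance computation to bound peak memory.
import torch

def kmeans_step(x, centroids, chunk_size=4096):
    """x: (N, D) points, centroids: (K, D). Returns (new_centroids, labels)."""
    n, d = x.shape
    k = centroids.shape[0]
    labels = torch.empty(n, dtype=torch.long, device=x.device)
    c_sq = (centroids ** 2).sum(dim=1)  # (K,) precomputed ||c||^2
    for start in range(0, n, chunk_size):
        xb = x[start:start + chunk_size]  # (B, D) chunk of points
        # ||x - c||^2 = ||x||^2 - 2 x·c + ||c||^2; ||x||^2 is constant per
        # row, so the argmin only needs the last two terms.
        dists = c_sq - 2.0 * (xb @ centroids.T)  # (B, K), never (N, K)
        labels[start:start + chunk_size] = dists.argmin(dim=1)
    # Update step: mean of assigned points; empty clusters keep old centroids.
    sums = torch.zeros(k, d, dtype=x.dtype, device=x.device)
    sums.index_add_(0, labels, x)
    counts = torch.bincount(labels, minlength=k)
    new_centroids = sums / counts.clamp(min=1).unsqueeze(1)
    new_centroids[counts == 0] = centroids[counts == 0]
    return new_centroids, labels
```

A typical driver loop under the same assumptions: initialize centroids from a random subset of points, then call `kmeans_step` until the assignments stop changing.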
Alternatives and similar repositories for flash-kmeans
Users interested in flash-kmeans are comparing it to the libraries listed below.
- An experimental communicating attention kernel based on DeepEP. ☆34 · Updated Jul 29, 2025
- ☆36 · Updated Sep 6, 2025
- A collection of specialized agent skills for AI infrastructure development, enabling Claude Code to write, optimize, and debug high-perfo… ☆121 · Updated Apr 15, 2026
- [ICML2025, NeurIPS2025 Spotlight] Sparse VideoGen 1 & 2: Accelerating Video Diffusion Transformers with Sparse Attention ☆659 · Updated Mar 6, 2026
- Fast and memory-efficient exact attention ☆21 · Updated Apr 10, 2026
- Implementation of FP8/INT8 rollout for RL training without performance drop. ☆297 · Updated Nov 7, 2025
- [NeurIPS'25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning ☆94 · Updated Apr 20, 2026
- Xmixers: A collection of SOTA efficient token/channel mixers ☆28 · Updated Sep 4, 2025
- ☆52 · Updated May 19, 2025
- ☆66 · Updated Apr 26, 2025
- ☆63 · Updated Jun 12, 2025
- Debug print operator for cudagraph debugging ☆15 · Updated Aug 2, 2024
- ☆245 · Updated Nov 19, 2025
- ☆12 · Updated Apr 9, 2025
- Transformers components but in Triton ☆34 · Updated May 9, 2025
- ☆13 · Updated Dec 9, 2024
- ☆140 · Updated Aug 18, 2025
- Benchmark tests supporting the TiledCUDA library. ☆18 · Updated Nov 19, 2024
- QAQ: Quality Adaptive Quantization for LLM KV Cache ☆53 · Updated Mar 27, 2024
- Distributed Compiler based on Triton for Parallel Systems ☆1,414 · Updated Apr 22, 2026
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆381 · Updated Jul 10, 2025
- [ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. ☆985 · Updated Feb 25, 2026
- Accelerating MoE with IO and Tile-aware Optimizations ☆661 · Updated Apr 22, 2026
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆540 · Updated Feb 10, 2025
- DeeperGEMM: crazy optimized version ☆86 · Updated May 5, 2025
- SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse–Linear Attention ☆302 · Updated Feb 24, 2026
- Patches for huggingface transformers to save memory ☆36 · Updated Jun 2, 2025
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference ☆294 · Updated May 1, 2025
- ☆38 · Updated Aug 7, 2025
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆795 · Updated Apr 21, 2026
- FlashTile is a CUDA Tile IR compiler that is compatible with NVIDIA's tileiras, targeting SM70 through SM121 NVIDIA GPUs. ☆60 · Updated Feb 6, 2026
- [NeurIPS'25] Benchmark for evaluating TTS models on complex prosodic, expressiveness, and linguistic challenges. ☆207 · Updated Dec 9, 2025
- ☆119 · Updated May 19, 2025
- [arXiv 2026] Official PyTorch Repository for "Coarse-Guided Visual Generation via Weighted h-Transform Sampling" ☆41 · Updated Mar 16, 2026
- Code for paper: [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆168 · Updated Oct 13, 2025
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆265 · Updated Nov 18, 2024
- Cute layout visualization ☆37 · Updated Jan 18, 2026
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated Jul 4, 2025
- Official repository for Flash Local Linear Attention ☆23 · Updated Apr 23, 2026