☆119 · May 19, 2025 · Updated 11 months ago
Alternatives and similar repositories for AttentionEngine
Users interested in AttentionEngine are comparing it to the libraries listed below.
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆19 · Updated this week
- ☆52 · May 19, 2025 · Updated 11 months ago
- High-performance LLM operator library built on TileLang. ☆104 · Updated this week
- ☆67 · Apr 26, 2025 · Updated last year
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on TileLang ☆44 · Nov 19, 2025 · Updated 5 months ago
- ☆36 · Mar 7, 2025 · Updated last year
- ☆32 · Jul 2, 2025 · Updated 9 months ago
- ☆38 · Aug 7, 2025 · Updated 8 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆108 · Jun 28, 2025 · Updated 10 months ago
- Tile-based language built for AI computation across all scales ☆141 · Mar 27, 2026 · Updated last month
- Fast and memory-efficient exact attention ☆75 · Mar 3, 2025 · Updated last year
- Building the Virtuous Cycle for AI-driven LLM Systems ☆226 · Updated this week
- Framework to reduce autotune overhead to zero for well-known deployments. ☆99 · Sep 19, 2025 · Updated 7 months ago
- ☆44 · Oct 15, 2025 · Updated 6 months ago
- Implement Flash Attention using CuTe. ☆105 · Dec 17, 2024 · Updated last year
- ☆13 · Dec 9, 2024 · Updated last year
- Distributed compiler based on Triton for parallel systems ☆1,414 · Apr 22, 2026 · Updated last week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆760 · Aug 6, 2025 · Updated 8 months ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak⚡️ performance. ☆151 · May 10, 2025 · Updated 11 months ago
- ☆264 · Jul 11, 2024 · Updated last year
- My tests and experiments with some popular DL frameworks. ☆17 · Sep 11, 2025 · Updated 7 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆480 · May 30, 2025 · Updated 10 months ago
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- A Python-embedded DSL that makes it easy to write fast, scalable ML kernels with minimal boilerplate. ☆850 · Updated this week
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- ☆57 · Feb 24, 2026 · Updated 2 months ago
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆5,632 · Updated this week
- An experimental communicating attention kernel based on DeepEP. ☆34 · Jul 29, 2025 · Updated 9 months ago
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆834 · Mar 6, 2025 · Updated last year
- ☆87 · Jan 23, 2025 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆113 · Sep 10, 2024 · Updated last year
- DeeperGEMM: crazy optimized version ☆86 · May 5, 2025 · Updated 11 months ago
- Open ABI and FFI for Machine Learning Systems ☆383 · Updated this week
- Debug print operator for cudagraph debugging ☆15 · Aug 2, 2024 · Updated last year
- A distributed attention towards linear scalability for ultra-long context, heterogeneous data training ☆787 · Apr 21, 2026 · Updated last week
- FFPA: Extend FlashAttention-2 with Split-D, ~O(1) SRAM complexity for large headdim, 1.8x~3x↑🎉 vs SDPA. ☆272 · Apr 22, 2026 · Updated last week
- ☆18 · Mar 4, 2025 · Updated last year
- A throughput-oriented high-performance serving framework for LLMs ☆954 · Mar 29, 2026 · Updated last month
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆193 · Jan 28, 2025 · Updated last year