infinigence / HamiltonAttention
☆33 · Updated last month
Alternatives and similar repositories for HamiltonAttention
Users interested in HamiltonAttention are comparing it to the libraries listed below:
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive (☆48, updated last month)
- NVSHMEM-Tutorial: Build a DeepEP-like GPU Buffer (☆143, updated last month)
- A lightweight design for computation-communication overlap (☆183, updated last month)
- Multi-Level Triton Runner supporting Python, IR, PTX, and cubin (☆76, updated last week)
- Tile-based language built for AI computation across all scales (☆78, updated this week)
- DeeperGEMM: crazy optimized version (☆73, updated 6 months ago)
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs (☆59, updated 7 months ago)
- A simple API to use CUPTI (☆11, updated 2 months ago)
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit (☆78, updated this week)
- [HPCA 2025] A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache (☆62, updated last week)
- Implement Flash Attention using Cute (☆96, updated 11 months ago)
- gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling (☆43, updated last month)
- Sequence-level 1F1B schedule for LLMs (☆32, updated 2 months ago)
- Automated Parallelization System and Infrastructure for Multiple Ecosystems (☆80, updated 11 months ago)
- An experimental communicating attention kernel based on DeepEP (☆34, updated 3 months ago)
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank (☆63, updated last year)
- Debug print operator for cudagraph debugging (☆14, updated last year)
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend (☆82, updated this week)
- [NeurIPS'25 Spotlight] Adaptive Attention Sparsity with Hierarchical Top-p Pruning (☆65, updated last week)
- High-performance Transformer implementation in C++ (☆141, updated 9 months ago)
- nnScaler: Compiling DNN models for Parallel Training (☆119, updated last month)
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … (☆186, updated 9 months ago)