HanGuo97 / hilt
☆37 · Updated 3 weeks ago
Alternatives and similar repositories for hilt
Users that are interested in hilt are comparing it to the libraries listed below
- DeeperGEMM: crazy optimized version · ☆73 · Updated 6 months ago
- ☆94 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments. · ☆88 · Updated 2 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. · ☆102 · Updated 5 months ago
- Debug print operator for CUDA graph debugging · ☆14 · Updated last year
- Building the Virtuous Cycle for AI-driven LLM Systems · ☆92 · Updated last week
- PyTorch bindings for CUTLASS grouped GEMM. · ☆131 · Updated 6 months ago
- ☆65 · Updated 7 months ago
- ☆51 · Updated 6 months ago
- Triton-based Symmetric Memory operators and examples · ☆63 · Updated last month
- Autonomous GPU Kernel Generation via Deep Agents · ☆163 · Updated last week
- Extensible collectives library in Triton · ☆91 · Updated 7 months ago
- An experimental communicating attention kernel based on DeepEP. · ☆34 · Updated 4 months ago
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning · ☆140 · Updated 2 weeks ago
- GitHub mirror of the triton-lang/triton repo. · ☆98 · Updated last week
- This repository contains companion software for the Colfax Research paper "Categorical Foundations for CuTe Layouts". · ☆80 · Updated 2 months ago
- ☆39 · Updated 3 months ago
- ☆51 · Updated 6 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. · ☆294 · Updated this week
- Implement Flash Attention using CuTe. · ☆97 · Updated 11 months ago
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on TileLang · ☆32 · Updated last week
- [HPCA 2026] A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. · ☆63 · Updated last week
- TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators · ☆95 · Updated 5 months ago
- ☆22 · Updated 8 months ago
- High-speed GEMV kernels with up to a 2.7x speedup over the PyTorch baseline. · ☆122 · Updated last year
- Multi-Level Triton Runner supporting Python, IR, PTX, and cubin. · ☆76 · Updated 2 weeks ago
- ☆14 · Updated 3 weeks ago
- Distributed MoE in a Single Kernel [NeurIPS '25] · ☆145 · Updated 2 months ago
- A lightweight design for computation-communication overlap. · ☆188 · Updated last month
- ☆31 · Updated 4 months ago