InfiniTensor / ninetoothed
A domain-specific language (DSL) based on Triton but providing higher-level abstractions.
☆24 · Updated this week
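To give a rough sense of the tile-based programming model that Triton-family DSLs abstract over, here is a plain NumPy sketch of a blocked matrix multiply. This is only a conceptual illustration of per-tile computation, not ninetoothed's actual API; all names and the tiling scheme are illustrative.

```python
import numpy as np

BLOCK = 4  # tile size; real GPU kernels tune this per architecture


def matmul_tiled(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Blocked matmul: each (i, j) pair plays the role of one GPU
    'program instance' computing a BLOCK x BLOCK output tile."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, BLOCK):
        for j in range(0, n, BLOCK):
            # Accumulate partial products over the shared K dimension,
            # one BLOCK-wide slab at a time.
            acc = np.zeros((min(BLOCK, m - i), min(BLOCK, n - j)), dtype=a.dtype)
            for p in range(0, k, BLOCK):
                acc += a[i:i + BLOCK, p:p + BLOCK] @ b[p:p + BLOCK, j:j + BLOCK]
            c[i:i + BLOCK, j:j + BLOCK] = acc
    return c
```

In Triton-style DSLs the two outer loops disappear: the runtime launches one program instance per output tile, and the kernel body only expresses the inner accumulation over tiles.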
Alternatives and similar repositories for ninetoothed
Users interested in ninetoothed are comparing it to the libraries listed below.
- DeepSeek-V3/R1 inference performance simulator ☆154 · Updated 3 months ago
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆168 · Updated 9 months ago
- A simple calculation for LLM MFU. ☆39 · Updated 4 months ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation. ☆95 · Updated last month
- ☆64 · Updated last year
- A lightweight design for computation-communication overlap. ☆146 · Updated 3 weeks ago
- ☆117 · Updated this week
- Stateful LLM Serving ☆76 · Updated 4 months ago
- An easily extensible framework for understanding and optimizing CUDA operators, intended for learning purposes only ☆15 · Updated last year
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆58 · Updated last month
- DeeperGEMM: crazy optimized version ☆69 · Updated 2 months ago
- TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators ☆62 · Updated last month
- Fast OS-level support for GPU checkpoint and restore ☆212 · Updated 3 weeks ago
- LLM Serving Performance Evaluation Harness ☆79 · Updated 4 months ago
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NIPS'24) ☆41 · Updated 6 months ago
- ☆237 · Updated last month
- ☆106 · Updated 8 months ago
- ☆79 · Updated 3 months ago
- Aims to implement dual-port and multi-qp solutions in deepEP ibrc transport ☆53 · Updated 2 months ago
- ☆60 · Updated 2 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆90 · Updated 2 weeks ago
- The driver for LMCache core to run in vLLM ☆44 · Updated 5 months ago
- Canvas: End-to-End Kernel Architecture Search in Neural Networks ☆27 · Updated 7 months ago
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆27 · Updated 5 months ago
- NEO is an LLM inference engine built to ease the GPU memory crunch via CPU offloading ☆44 · Updated 3 weeks ago
- A resilient distributed training framework ☆95 · Updated last year
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆40 · Updated 2 months ago
- ☆49 · Updated last month
- FractalTensor is a programming framework that introduces a novel approach to organizing data in deep neural networks (DNNs) as a list of … ☆27 · Updated 6 months ago
- LLM Inference analyzer for different hardware platforms ☆79 · Updated last week