PanZaifeng / FastTree-Artifact
☆24 · Updated 6 months ago
Alternatives and similar repositories for FastTree-Artifact
Users interested in FastTree-Artifact are comparing it to the libraries listed below.
- A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆60 · Updated last month
- ☆122 · Updated 11 months ago
- A lightweight design for computation-communication overlap. ☆179 · Updated 3 weeks ago
- ☆152 · Updated last year
- [NeurIPS 2025] ClusterFusion: Expanding Operator Fusion Scope for LLM Inference via Cluster-Level Collective Primitive ☆41 · Updated 2 weeks ago
- nnScaler: Compiling DNN models for Parallel Training ☆118 · Updated 2 weeks ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆59 · Updated 6 months ago
- ☆54 · Updated last year
- ☆72 · Updated last year
- Compiler for Dynamic Neural Networks ☆46 · Updated last year
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆221 · Updated 2 years ago
- A kernel-optimizing system for recommendation models. ☆10 · Updated 4 months ago
- High-performance Transformer implementation in C++. ☆135 · Updated 8 months ago
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24) ☆43 · Updated 9 months ago
- Multi-Level Triton Runner supporting Python, IR, PTX, and cubin. ☆70 · Updated last week
- ☆56 · Updated last year
- [EuroSys'25] Mist: Efficient Distributed Training of Large Language Models via Memory-Parallelism Co-Optimization ☆18 · Updated 2 months ago
- GitHub mirror of the triton-lang/triton repo. ☆82 · Updated last week
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆154 · Updated last year
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) in deep learning on Tensor Cores. ☆89 · Updated 2 years ago
- ☆24 · Updated 6 months ago
- Tile-based language built for AI computation across all scales ☆66 · Updated last week
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆66 · Updated 6 months ago
- Implement Flash Attention using CuTe. ☆96 · Updated 9 months ago
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores ☆53 · Updated last year
- ☆64 · Updated 5 months ago
- LLM serving cluster simulator ☆115 · Updated last year
- PyTorch library for cost-effective, fast, and easy serving of MoE models. ☆244 · Updated 3 months ago
- ☆75 · Updated 4 years ago
- Explore Inter-layer Expert Affinity in MoE Model Inference ☆14 · Updated last year