Supercomputing-System-AI-Lab / MiLo
Code repo for efficient quantized MoE inference with mixture of low-rank compensators
☆18 · Updated 2 months ago
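The core idea named in the description, quantizing expert weights and patching the quantization error with low-rank compensators, can be sketched in a few lines. Below is a minimal PyTorch illustration assuming symmetric int4 round-to-nearest quantization and an SVD fit of the residual; the function names, rank, and quantizer are illustrative assumptions, not MiLo's actual API or method.

```python
# Minimal sketch of the quantize-then-compensate idea behind low-rank
# compensators (illustrative only; not MiLo's actual implementation).
import torch

def quantize_int4_sym(w: torch.Tensor) -> torch.Tensor:
    """Symmetric per-row 4-bit fake quantization (round-to-nearest)."""
    scale = w.abs().amax(dim=1, keepdim=True) / 7.0  # int4 range: [-8, 7]
    q = torch.clamp(torch.round(w / scale), -8, 7)
    return q * scale  # dequantized view of the quantized weight

def lowrank_compensator(w: torch.Tensor, w_q: torch.Tensor, rank: int = 16):
    """Fit a rank-r correction to the quantization residual via truncated SVD."""
    residual = w - w_q
    u, s, vh = torch.linalg.svd(residual, full_matrices=False)
    a = u[:, :rank] * s[:rank]  # (out, r)
    b = vh[:rank, :]            # (r, in)
    return a, b

w = torch.randn(256, 256)
w_q = quantize_int4_sym(w)
a, b = lowrank_compensator(w, w_q, rank=16)
print("quantization error:", (w - w_q).norm().item())
print("compensated error: ", (w - (w_q + a @ b)).norm().item())
```

At inference time the compensated layer computes `x @ (w_q + a @ b).T`, so the dense weight stays in low-bit storage while only the two thin low-rank factors are kept in full precision.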
Alternatives and similar repositories for MiLo
Users interested in MiLo are comparing it to the libraries listed below
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆39 · Updated 2 months ago
- ☆62 · Updated last year
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection (see the first sketch after this list) ☆123 · Updated 4 months ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆90 · Updated 2 months ago
- 16-fold memory access reduction with nearly no loss ☆99 · Updated 3 months ago
- Official implementation for Yuan & Liu & Zhong et al., KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark o… ☆79 · Updated 4 months ago
- Accommodating Large Language Model Training over Heterogeneous Environment. ☆24 · Updated 3 months ago
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆138 · Updated 11 months ago
- An experimentation platform for LLM inference optimisation ☆31 · Updated 9 months ago
- A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆49 · Updated 2 weeks ago
- A resilient distributed training framework ☆95 · Updated last year
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… ☆154 · Updated last week
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems ☆177 · Updated 8 months ago
- LLM inference analyzer for different hardware platforms ☆74 · Updated last month
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… ☆39 · Updated last year
- ☆104 · Updated 7 months ago
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… ☆255 · Updated 3 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆113 · Updated last week
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline". ☆88 · Updated 2 years ago
- LLM serving cluster simulator ☆106 · Updated last year
- ☆148 · Updated 11 months ago
- A lightweight design for computation-communication overlap. ☆143 · Updated last week
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆163 · Updated 11 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆100 · Updated 3 weeks ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference (see the second sketch after this list) ☆297 · Updated 7 months ago
- ☆10 · Updated 8 months ago
- LLM Inference with Microscaling Format ☆23 · Updated 7 months ago
- SQUEEZED ATTENTION: Accelerating Long Prompt LLM Inference ☆47 · Updated 7 months ago
- ☆19 · Updated 2 months ago
- Code release for AdapMoE, accepted at ICCAD 2024 ☆26 · Updated 2 months ago
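Two of the listed projects name techniques concrete enough to sketch. First, Palu's low-rank KV-cache projection: instead of caching a full key per token, cache a rank-r latent and reconstruct the key on demand. The sketch below assumes an offline SVD of a single head's key projection; Palu's actual method (grouped heads, fused reconstruction kernels) is more involved, and the shapes and rank here are illustrative.

```python
# Minimal sketch of low-rank KV-cache projection in the spirit of Palu
# (assumption: offline SVD of one head's key projection; not the paper's exact method).
import torch

d_model, d_head, rank = 512, 64, 16
w_k = torch.randn(d_head, d_model)  # key projection of one attention head

# Factor W_k ~= A @ B offline; at runtime cache only the rank-r latent B @ x.
u, s, vh = torch.linalg.svd(w_k, full_matrices=False)
A = u[:, :rank] * s[:rank]  # (d_head, r): applied when keys are consumed
B = vh[:rank, :]            # (r, d_model): applied when the cache is written

x = torch.randn(d_model)    # one token's hidden state
latent = B @ x              # store r floats per token instead of d_head
k_approx = A @ latent       # reconstruct the key only when attention needs it
k_exact = w_k @ x
print("relative key error:", ((k_exact - k_approx).norm() / k_exact.norm()).item())
```

The memory saving is the ratio `d_head / rank` per token; the reconstruction matmul with `A` can be folded into the attention computation so no full-size key cache ever materializes.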
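Second, Quest's query-aware sparsity: summarize each KV-cache page by per-dimension key minima and maxima, bound the attention score each page could contribute for the current query, and attend only to the top-scoring pages. A minimal sketch with an assumed page size and top-k; the official repository provides optimized CUDA kernels rather than this dense PyTorch form.

```python
# Minimal sketch of query-aware page selection in the spirit of Quest
# (assumption: per-page key min/max summaries; page size and k are illustrative).
import torch

pages, page_len, d = 8, 16, 64
keys = torch.randn(pages, page_len, d)  # KV cache split into fixed-size pages
k_min = keys.amin(dim=1)                # (pages, d) per-page summaries
k_max = keys.amax(dim=1)

q = torch.randn(d)
# Upper bound on q . k for any k in a page: per dimension, pick the extreme
# whose sign matches q so each term is maximized.
bound = torch.where(q >= 0, k_max, k_min) @ q  # (pages,)
top = torch.topk(bound, k=2).indices           # attend only to promising pages
print("selected pages:", top.tolist())
```

Because the bound is computed from two d-dimensional vectors per page rather than every cached key, the selection step is cheap, and full attention is then run only over the selected pages.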