goliaro / specinfer-ae
☆19 Updated last year
Alternatives and similar repositories for specinfer-ae
Users interested in specinfer-ae are comparing it to the libraries listed below.
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆130 Updated 10 months ago
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NIPS'24) ☆38 Updated 4 months ago
- LLM serving cluster simulator ☆99 Updated last year
- ☆106 Updated 2 weeks ago
- LLM Inference analyzer for different hardware platforms ☆66 Updated 2 weeks ago
- ☆53 Updated last year
- ☆139 Updated 10 months ago
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆50 Updated 11 months ago
- This repository is established to store personal notes and annotated papers during daily research. ☆120 Updated 3 weeks ago
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆25 Updated last year
- ☆13 Updated 10 months ago
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System ☆44 Updated last year
- Summary of some awesome work for optimizing LLM inference ☆72 Updated last month
- ☆96 Updated 6 months ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆44 Updated last month
- Large Language Model (LLM) Serving Paper and Resource List ☆22 Updated 8 months ago
- ☆47 Updated last year
- ☆60 Updated 11 months ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI'23) ☆81 Updated last year
- Curated collection of papers in MoE model inference ☆167 Updated 2 months ago
- Compiler for Dynamic Neural Networks ☆46 Updated last year
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆14 Updated 10 months ago
- ☆97 Updated last year
- Explore Inter-layer Expert Affinity in MoE Model Inference ☆9 Updated last year
- ☆20 Updated 11 months ago
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆40 Updated 5 months ago
- ☆13 Updated last month
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆113 Updated 2 months ago
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems ☆165 Updated 6 months ago
- ☆36 Updated 6 months ago