friendliai / LLMServingPerfEvaluator
☆47 · Updated 11 months ago
Alternatives and similar repositories for LLMServingPerfEvaluator
Users interested in LLMServingPerfEvaluator are comparing it to the libraries listed below.
- Welcome to PeriFlow CLI ☁︎ ☆12 · Updated 2 years ago
- FMO (Friendli Model Optimizer) ☆12 · Updated 7 months ago
- ☆103 · Updated 2 years ago
- A performance library for machine learning applications. ☆184 · Updated last year
- FriendliAI Model Hub ☆91 · Updated 3 years ago
- ☆25 · Updated 2 years ago
- PyTorch CoreSIG ☆56 · Updated 8 months ago
- ☆54 · Updated 9 months ago
- ☆73 · Updated 3 months ago
- ☆90 · Updated last year
- Official GitHub repository for the SIGCOMM '24 paper "Accelerating Model Training in Multi-cluster Environments with Consumer-grade GPUs" ☆70 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆81 · Updated this week
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆119 · Updated last year
- Easy and Efficient Quantization for Transformers ☆203 · Updated 2 months ago
- Study Group of Deep Learning Compiler ☆163 · Updated 2 years ago
- [ACM EuroSys 2023] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆57 · Updated 3 weeks ago
- ☆56 · Updated 2 years ago
- MIST: High-performance IoT Stream Processing ☆17 · Updated 6 years ago
- Network Contention-Aware Cluster Scheduling with Reinforcement Learning (IEEE ICPADS '23) ☆16 · Updated last month
- ☆15 · Updated 4 years ago
- NEST Compiler ☆117 · Updated 6 months ago
- A low-latency & high-throughput serving engine for LLMs ☆408 · Updated 3 months ago
- Lightweight and Parallel Deep Learning Framework ☆264 · Updated 2 years ago
- ☆58 · Updated 11 months ago
- Triangles in action! Triton ☆16 · Updated last year
- OwLite is a low-code compression toolkit for AI models. ☆50 · Updated 3 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆407 · Updated 3 months ago
- ☆12 · Updated 4 months ago
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆449 · Updated 4 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆223 · Updated this week