friendliai / friendli-model-optimizer
FMO (Friendli Model Optimizer)
☆12 · Updated 3 weeks ago
Alternatives and similar repositories for friendli-model-optimizer, as compared by interested users:
- ☆43 · Updated 4 months ago
- Friendli: the fastest serving engine for generative AI ☆42 · Updated last week
- Welcome to PeriFlow CLI ☁︎ ☆12 · Updated last year
- FriendliAI Model Hub ☆89 · Updated 2 years ago
- MIST: High-performance IoT Stream Processing ☆17 · Updated 5 years ago
- ☆102 · Updated last year
- ☆25 · Updated 6 years ago
- ☆15 · Updated 3 years ago
- ☆25 · Updated 2 years ago
- Nemo: A flexible data processing system ☆22 · Updated 6 years ago
- Cruise: A Distributed Machine Learning Framework with Automatic System Configuration ☆26 · Updated 5 years ago
- Dotfile management with bare git ☆19 · Updated this week
- A performance library for machine learning applications ☆183 · Updated last year
- ☆47 · Updated 2 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆114 · Updated 10 months ago
- PyTorch CoreSIG ☆54 · Updated last month
- ☆22 · Updated 7 years ago
- ☆22 · Updated 5 years ago
- Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access (ACM EuroSys '23) ☆55 · Updated 10 months ago
- Lightweight and Parallel Deep Learning Framework ☆263 · Updated 2 years ago
- ☆28 · Updated 3 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆50 · Updated this week
- OwLite: a low-code compression toolkit for AI models ☆39 · Updated 4 months ago
- Network Contention-Aware Cluster Scheduling with Reinforcement Learning (IEEE ICPADS 2023) ☆15 · Updated 3 months ago
- ☆56 · Updated 2 years ago
- ☆83 · Updated 10 months ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆80 · Updated 3 weeks ago
- "JABAS: Joint Adaptive Batching and Automatic Scaling for DNN Training on Heterogeneous GPUs" (EuroSys '25) ☆12 · Updated last week
- ☆12 · Updated 4 months ago
- ☆31 · Updated 2 years ago