friendliai / friendli-model-optimizer
FMO (Friendli Model Optimizer) ☆12 · Updated 6 months ago
Alternatives and similar repositories for friendli-model-optimizer
Users interested in friendli-model-optimizer are comparing it to the libraries listed below.
- ☆46 · Updated 10 months ago
- [⛔️ DEPRECATED] Friendli: the fastest serving engine for generative AI ☆48 · Updated 2 weeks ago
- FriendliAI Model Hub ☆91 · Updated 3 years ago
- Welcome to PeriFlow CLI ☁︎ ☆12 · Updated last year
- ☆25 · Updated 2 years ago
- ☆102 · Updated 2 years ago
- A performance library for machine learning applications. ☆184 · Updated last year
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆117 · Updated last year
- ☆24 · Updated 6 years ago
- Easy and Efficient Quantization for Transformers ☆198 · Updated 2 weeks ago
- Ditto is an open-source framework that enables direct conversion of HuggingFace PreTrainedModels into TensorRT-LLM engines. ☆44 · Updated this week
- ☆54 · Updated 8 months ago
- ☆73 · Updated last month
- Lightweight and Parallel Deep Learning Framework ☆264 · Updated 2 years ago
- MIST: High-performance IoT Stream Processing ☆17 · Updated 6 years ago
- ☆15 · Updated 4 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆76 · Updated this week
- Study Group of Deep Learning Compiler ☆161 · Updated 2 years ago
- PyTorch CoreSIG ☆55 · Updated 6 months ago
- NEST Compiler ☆116 · Updated 5 months ago
- ☆90 · Updated last year
- A low-latency & high-throughput serving engine for LLMs ☆388 · Updated last month
- Official GitHub repository for the SIGCOMM '24 paper "Accelerating Model Training in Multi-cluster Environments with Consumer-grade GPUs" ☆70 · Updated last year
- FuriosaAI SDK ☆46 · Updated 11 months ago
- Tiny configuration for Triton Inference Server ☆45 · Updated 6 months ago
- Nemo: A flexible data processing system ☆21 · Updated 7 years ago
- Dotfile management with bare git ☆19 · Updated this week
- [ACM EuroSys '23] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆57 · Updated last year
- OwLite is a low-code toolkit for compressing AI models. ☆46 · Updated last month
- ☆56 · Updated 2 years ago