friendliai / friendli-model-optimizer
FMO (Friendli Model Optimizer)
☆12 · Updated 3 months ago
Alternatives and similar repositories for friendli-model-optimizer:
Users interested in friendli-model-optimizer are comparing it to the libraries listed below.
- ☆45 · Updated 7 months ago
- Friendli: the fastest serving engine for generative AI ☆44 · Updated 3 months ago
- FriendliAI Model Hub ☆92 · Updated 2 years ago
- Welcome to PeriFlow CLI ☁︎ ☆12 · Updated last year
- MIST: High-performance IoT Stream Processing ☆17 · Updated 6 years ago
- Nemo: A flexible data processing system ☆21 · Updated 7 years ago
- A performance library for machine learning applications. ☆184 · Updated last year
- ☆101 · Updated last year
- ☆15 · Updated 3 years ago
- ☆25 · Updated 2 years ago
- ☆24 · Updated 6 years ago
- PyTorch CoreSIG ☆55 · Updated 3 months ago
- Dotfile management with bare git ☆19 · Updated 2 weeks ago
- ☆22 · Updated 5 years ago
- ☆28 · Updated 3 years ago
- Apache Nemo (Incubating) - Data Processing System for Flexible Employment With Different Deployment Characteristics ☆111 · Updated last year
- ☆52 · Updated 5 months ago
- Lightweight and Parallel Deep Learning Framework ☆261 · Updated 2 years ago
- ☆31 · Updated 2 years ago
- ☆22 · Updated 7 years ago
- Cruise: A Distributed Machine Learning Framework with Automatic System Configuration ☆26 · Updated 6 years ago
- Ditto is an open-source framework that enables direct conversion of HuggingFace PreTrainedModels into TensorRT-LLM engines. ☆38 · Updated this week
- Official GitHub repository for the SIGCOMM '24 paper "Accelerating Model Training in Multi-cluster Environments with Consumer-grade GPUs" ☆71 · Updated 9 months ago
- Tiny configuration for Triton Inference Server ☆45 · Updated 3 months ago
- 42dot LLM consists of a pre-trained language model, 42dot LLM-PLM, and a fine-tuned model, 42dot LLM-SFT, which is trained to respond to … ☆130 · Updated last year
- ☆83 · Updated last year
- ☆56 · Updated 2 years ago
- Study Group of Deep Learning Compiler ☆158 · Updated 2 years ago
- ☆66 · Updated last month
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆116 · Updated last year