friendliai / friendli-model-optimizer
FMO (Friendli Model Optimizer)
☆12 · Updated 8 months ago
Alternatives and similar repositories for friendli-model-optimizer
Users interested in friendli-model-optimizer are comparing it to the libraries listed below.
- ☆47 · Updated last year
- [⛔️ DEPRECATED] Friendli: the fastest serving engine for generative AI ☆48 · Updated 2 months ago
- Welcome to PeriFlow CLI ☁︎ ☆12 · Updated 2 years ago
- FriendliAI Model Hub ☆91 · Updated 3 years ago
- A performance library for machine learning applications. ☆184 · Updated last year
- ☆103 · Updated 2 years ago
- ☆54 · Updated 10 months ago
- ☆25 · Updated 2 years ago
- Ditto is an open-source framework that enables direct conversion of HuggingFace PreTrainedModels into TensorRT-LLM engines. ☆49 · Updated 2 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- ☆24 · Updated 6 years ago
- Lightweight and Parallel Deep Learning Framework ☆264 · Updated 2 years ago
- Easy and Efficient Quantization for Transformers ☆203 · Updated 2 months ago
- MIST: High-performance IoT Stream Processing ☆17 · Updated 6 years ago
- ☆15 · Updated 4 years ago
- ☆73 · Updated 3 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆83 · Updated this week
- A lightweight adjustment tool for smoothing token probabilities in the Qwen models to encourage balanced multilingual generation. ☆83 · Updated 2 months ago
- OSLO: Open Source for Large-scale Optimization ☆175 · Updated 2 years ago
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆127 · Updated 9 months ago
- PyTorch CoreSIG ☆56 · Updated 8 months ago
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆147 · Updated last week
- OwLite is a low-code compression toolkit for AI models. ☆50 · Updated 4 months ago
- ☆12 · Updated last year
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆416 · Updated 3 months ago
- ☆90 · Updated last year
- Command-line utility for monitoring GPU hardware. ☆86 · Updated last week
- A low-latency & high-throughput serving engine for LLMs ☆418 · Updated 3 months ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆211 · Updated 2 weeks ago
- 42dot LLM consists of a pre-trained language model, 42dot LLM-PLM, and a fine-tuned model, 42dot LLM-SFT, which is trained to respond to … ☆130 · Updated last year