friendliai / friendli-model-optimizer
FMO (Friendli Model Optimizer)
☆13 · Updated last year
Alternatives and similar repositories for friendli-model-optimizer
Users interested in friendli-model-optimizer are also comparing it to the libraries listed below:
- ☆48 · Updated last year
- [⛔️ DEPRECATED] Friendli: the fastest serving engine for generative AI ☆49 · Updated 6 months ago
- FriendliAI Model Hub ☆92 · Updated 3 years ago
- A performance library for machine learning applications. ☆185 · Updated 2 years ago
- Welcome to PeriFlow CLI ☁︎ ☆12 · Updated 2 years ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- ☆103 · Updated 2 years ago
- ☆51 · Updated last week
- Ditto is an open-source framework that enables direct conversion of HuggingFace PreTrainedModels into TensorRT-LLM engines. ☆53 · Updated 5 months ago
- ☆26 · Updated 3 years ago
- Easy and Efficient Quantization for Transformers ☆202 · Updated 6 months ago
- ☆24 · Updated 7 years ago
- ☆27 · Updated 2 years ago
- ☆56 · Updated last year
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆147 · Updated 2 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆85 · Updated this week
- AIPerf is a comprehensive benchmarking tool that measures the performance of generative AI models served by your preferred inference solu… ☆83 · Updated 2 weeks ago
- Lightweight and Parallel Deep Learning Framework ☆263 · Updated 3 years ago
- ☆81 · Updated 7 months ago
- extensible collectives library in triton ☆91 · Updated 9 months ago
- Perplexity open source garden for inference technology ☆324 · Updated 2 weeks ago
- ☆71 · Updated 9 months ago
- OSLO: Open Source for Large-scale Optimization ☆175 · Updated 2 years ago
- Boosting 4-bit inference kernels with 2:4 Sparsity (see the pruning sketch after this list) ☆90 · Updated last year
- Microsoft Collective Communication Library ☆66 · Updated last year
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU(XPU) device. Note… ☆63 · Updated 6 months ago
- TPU inference for vLLM, with unified JAX and PyTorch support. ☆207 · Updated this week
- Triangles in action! Triton ☆16 · Updated last year
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆182 · Updated this week
- Thunder Research Group's Collective Communication Library ☆46 · Updated 6 months ago
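One entry above, "Boosting 4-bit inference kernels with 2:4 Sparsity", refers to the 2:4 structured-sparsity pattern that NVIDIA sparse tensor cores accelerate: in every contiguous group of four weights, at most two are nonzero. The sketch below is a minimal, framework-agnostic illustration of magnitude-based 2:4 pruning in NumPy; it is not code from that repository, and the helper name `prune_2_4` is hypothetical.

```python
# Minimal sketch of magnitude-based 2:4 structured pruning (illustrative
# only; not taken from the repository listed above).
import numpy as np

def prune_2_4(weights: np.ndarray) -> np.ndarray:
    """Zero the two smallest-magnitude values in every group of four."""
    flat = weights.reshape(-1, 4)                   # contiguous groups of 4
    drop = np.argsort(np.abs(flat), axis=1)[:, :2]  # indices of the 2 smallest |w|
    pruned = flat.copy()
    np.put_along_axis(pruned, drop, 0.0, axis=1)    # zero them out
    return pruned.reshape(weights.shape)

# Usage: a 2x8 weight matrix becomes 2:4 sparse (>= 2 zeros per group of 4).
w = np.random.randn(2, 8).astype(np.float32)
sparse = prune_2_4(w)
assert (sparse.reshape(-1, 4) == 0).sum(axis=1).min() >= 2
```

Real 2:4 kernels additionally pack the positions of the two surviving weights into compact metadata so the hardware can skip the zeros; this sketch only produces the masked weight tensor.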