friendliai / friendli-model-optimizer
FMO (Friendli Model Optimizer)
☆12 · Updated 9 months ago
Alternatives and similar repositories for friendli-model-optimizer
Users interested in friendli-model-optimizer are comparing it to the libraries listed below.
- ☆47 · Updated last year
- [⛔️ DEPRECATED] Friendli: the fastest serving engine for generative AI · ☆48 · Updated 3 months ago
- FriendliAI Model Hub · ☆91 · Updated 3 years ago
- Welcome to PeriFlow CLI ☁︎ · ☆12 · Updated 2 years ago
- ☆103 · Updated 2 years ago
- A performance library for machine learning applications. · ☆184 · Updated 2 years ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference · ☆118 · Updated last year
- ☆25 · Updated 2 years ago
- ☆15 · Updated 4 years ago
- Easy and Efficient Quantization for Transformers · ☆203 · Updated 3 months ago
- Ditto is an open-source framework that enables direct conversion of HuggingFace PreTrainedModels into TensorRT-LLM engines. · ☆49 · Updated 2 months ago
- Lightweight and Parallel Deep Learning Framework · ☆264 · Updated 2 years ago
- Training-free post-training attention with efficient sub-quadratic complexity, implemented with OpenAI Triton. · ☆148 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆83 · Updated this week
- ☆73 · Updated 4 months ago
- ☆54 · Updated 10 months ago
- PyTorch CoreSIG · ☆57 · Updated 9 months ago
- ☆24 · Updated 6 years ago
- A low-latency & high-throughput serving engine for LLMs · ☆425 · Updated 4 months ago
- MIST: High-performance IoT Stream Processing · ☆17 · Updated 6 years ago
- ☆32 · Updated 10 months ago
- Large Language Model Text Generation Inference on Habana Gaudi · ☆34 · Updated 6 months ago
- ☆91 · Updated last year
- [ICLR2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding · ☆130 · Updated 10 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity · ☆83 · Updated last year
- A lightweight adjustment tool for smoothing token probabilities in the Qwen models to encourage balanced multilingual generation. · ☆91 · Updated 3 months ago
- OwLite is a low-code compression toolkit for AI models. · ☆50 · Updated 4 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention · ☆424 · Updated 4 months ago
- Performant kernels for symmetric tensors · ☆15 · Updated last year
- ☆27 · Updated last year