friendliai / friendli-model-optimizer
FMO (Friendli Model Optimizer)
☆12 · Updated 7 months ago
Alternatives and similar repositories for friendli-model-optimizer
Users interested in friendli-model-optimizer are comparing it to the libraries listed below.
- ☆47 · Updated 11 months ago
- [⛔️ DEPRECATED] Friendli: the fastest serving engine for generative AI ☆48 · Updated 2 months ago
- Welcome to PeriFlow CLI ☁︎ ☆12 · Updated 2 years ago
- FriendliAI Model Hub ☆91 · Updated 3 years ago
- A performance library for machine learning applications. ☆184 · Updated last year
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆119 · Updated last year
- ☆103 · Updated 2 years ago
- ☆25 · Updated 2 years ago
- Easy and Efficient Quantization for Transformers ☆203 · Updated 2 months ago
- ☆26 · Updated last year
- Ditto is an open-source framework that enables direct conversion of HuggingFace PreTrainedModels into TensorRT-LLM engines. ☆49 · Updated last month
- ☆54 · Updated 9 months ago
- ☆73 · Updated 3 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆80 · Updated 11 months ago
- Training-free, post-training, sub-quadratic-complexity attention, implemented with OpenAI Triton. ☆145 · Updated this week
- Lightweight and parallel deep learning framework ☆264 · Updated 2 years ago
- A fork of SGLang for hip-attention integration; please refer to hip-attention for details. ☆17 · Updated 2 weeks ago
- Code for data-aware compression of DeepSeek models ☆44 · Updated 2 months ago
- ☆15 · Updated 4 years ago
- [ICLR 2025] Breaking the throughput-latency trade-off for long sequences with speculative decoding ☆125 · Updated 8 months ago
- PyTorch CoreSIG ☆56 · Updated 8 months ago
- [ICLR 2025] Fast inference of MoE models with CPU-GPU orchestration ☆226 · Updated 9 months ago
- Examples for the MS-AMP package ☆29 · Updated last month
- ☆24 · Updated 6 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆81 · Updated this week
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆113 · Updated last month
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆195 · Updated last week
- MIST: High-performance IoT stream processing ☆17 · Updated 6 years ago
- Study group on deep learning compilers ☆163 · Updated 2 years ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆374 · Updated last year