mani-kantap / llm-inference-solutions
A collection of available inference solutions for LLMs
☆90 · Updated 3 months ago
Alternatives and similar repositories for llm-inference-solutions
Users interested in llm-inference-solutions are comparing it to the libraries listed below.
- ☆155 · Updated this week
- Easy and Efficient Quantization for Transformers ☆199 · Updated 4 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆264 · Updated 8 months ago
- Inference server benchmarking tool ☆73 · Updated last month
- ☆93 · Updated 3 weeks ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs (see the usage sketch after this list) ☆87 · Updated last week
- ☆55 · Updated 9 months ago
- Comparison of Language Model Inference Engines ☆217 · Updated 6 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆238 · Updated last year
- Benchmark suite for LLMs from Fireworks.ai ☆76 · Updated 2 weeks ago
- ☆267 · Updated last week
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆213 · Updated 7 months ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆119 · Updated this week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆311 · Updated last month
- A general 2-8 bit quantization toolbox supporting GPTQ/AWQ/HQQ/VPTQ, with easy export to ONNX/ONNX Runtime ☆172 · Updated 2 months ago
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ☆69 · Updated this week
- Cray-LM unified training and inference stack. ☆22 · Updated 4 months ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆358 · Updated 10 months ago
- Experiments with inference on LLaMA ☆104 · Updated last year
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆129 · Updated last month
- A low-latency & high-throughput serving engine for LLMs ☆379 · Updated 3 weeks ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆348 · Updated this week
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆845 · Updated 9 months ago
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated 9 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆198 · Updated 11 months ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆206 · Updated this week
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆304 · Updated 3 weeks ago
- A simple extension on vLLM to help you speed up reasoning models without training. ☆161 · Updated 3 weeks ago
- ☆53 · Updated last year
- ☆66 · Updated last year
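
Several entries above, including the two vLLM listings, are serving engines that expose an OpenAI-compatible HTTP API once running. Below is a minimal sketch of querying such a server with the official `openai` Python client; it assumes a vLLM server started with `vllm serve <model>` and listening on vLLM's default `localhost:8000`, and `"my-model"` is a placeholder for whatever model the server loaded.

```python
# Minimal sketch: querying a running vLLM server through its
# OpenAI-compatible endpoint. Assumes the server was started with
# `vllm serve <model>` and listens on localhost:8000 (vLLM's default);
# "my-model" is a placeholder for the model the server actually loaded.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible API
    api_key="EMPTY",                      # vLLM does not check the key by default
)

response = client.completions.create(
    model="my-model",  # placeholder model name
    prompt="Explain KV cache quantization in one sentence.",
    max_tokens=64,
    temperature=0.0,
)
print(response.choices[0].text)
```

Because the endpoint follows the OpenAI wire format, the same client code works unchanged against other OpenAI-compatible servers in this list; only `base_url` and the model name need to change.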