mani-kantap / llm-inference-solutions
A collection of all available inference solutions for LLMs
☆91 · Updated 5 months ago
Alternatives and similar repositories for llm-inference-solutions
Users interested in llm-inference-solutions are comparing it to the libraries listed below.
- Comparison of Language Model Inference Engines ☆228 · Updated 8 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Updated 10 months ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆222 · Updated this week
- Easy and Efficient Quantization for Transformers ☆201 · Updated last month
- ☆289 · Updated 2 weeks ago
- ☆51 · Updated last year
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆491 · Updated last week
- Inference server benchmarking tool ☆93 · Updated 3 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆309 · Updated this week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆210 · Updated last week
- Benchmark suite for LLMs from Fireworks.ai ☆79 · Updated 3 weeks ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆88 · Updated this week
- Benchmarking the serving capabilities of vLLM ☆48 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated 11 months ago
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving. ☆70 · Updated last year
- Self-host LLMs with vLLM and BentoML ☆140 · Updated 2 weeks ago
- A pipeline for LLM knowledge distillation ☆108 · Updated 4 months ago
- Google TPU optimizations for Transformers models ☆118 · Updated 7 months ago
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models ☆139 · Updated last year
- KV cache compression for high-throughput LLM inference ☆134 · Updated 6 months ago
- ☆63 · Updated 4 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆327 · Updated 3 months ago
- Easy-to-use, high-performance knowledge distillation for LLMs ☆90 · Updated 3 months ago
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆83 · Updated 3 months ago
- Repo hosting code and materials related to speeding up LLM inference using token merging ☆36 · Updated last month
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆236 · Updated 9 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆199 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated last year
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆200 · Updated last week
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆213 · Updated last year
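
Several entries above are vLLM itself or build on it (the Ray Serve integration, the BentoML self-hosting template, and the serving benchmarks). For orientation, here is a minimal offline-generation sketch using vLLM's Python API; the model id and sampling values are illustrative and not taken from any of the listed repositories.

```python
# Minimal vLLM offline-generation sketch (illustrative model id and settings).
from vllm import LLM, SamplingParams

prompts = ["What is speculative decoding?"]

# Sampling settings are arbitrary examples, not recommendations.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

# Any Hugging Face model id supported by vLLM can be used here.
llm = LLM(model="facebook/opt-125m")

# generate() batches the prompts and returns one RequestOutput per prompt.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```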