mani-kantap / llm-inference-solutions
A collection of all available inference solutions for LLMs
☆94 Updated 10 months ago
Alternatives and similar repositories for llm-inference-solutions
Users interested in llm-inference-solutions are comparing it to the repositories listed below
- A high-throughput and memory-efficient inference and serving engine for LLMs (a minimal usage sketch follows this list) ☆267 Updated last month
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆276 Updated this week
- Comparison of Language Model Inference Engines ☆238 Updated last year
- ☆67 Updated 9 months ago
- ☆51 Updated last year
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆190 Updated this week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆131 Updated 3 months ago
- Self-host LLMs with vLLM and BentoML ☆163 Updated last month
- A pipeline for LLM knowledge distillation ☆112 Updated 9 months ago
- This reference can be used with any existing OpenAI integrated apps to run with TRT-LLM inference locally on GeForce GPU on Windows inste… ☆127 Updated last year
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆325 Updated 3 months ago
- ☆50 Updated last year
- SGLang is a fast serving framework for large language models and vision language models. ☆31 Updated last month
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving. ☆78 Updated last year
- Google TPU optimizations for transformers models ☆132 Updated 3 weeks ago
- Self-host LLMs with LMDeploy and BentoML ☆22 Updated 2 weeks ago
- Benchmark suite for LLMs from Fireworks.ai ☆84 Updated last month
- [ACL'25] Official Code for LlamaDuo: LLMOps Pipeline for Seamless Migration from Service LLMs to Small-Scale Local LLMs ☆314 Updated 6 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆260 Updated last year
- Easy and Efficient Quantization for Transformers ☆202 Updated 6 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 Updated 2 years ago
- ☆322 Updated this week
- ☆137 Updated last year
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆368 Updated last week
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆139 Updated last year
- Simple examples using Argilla tools to build AI ☆57 Updated last year
- ☆275 Updated last week
- Low-Rank adapter extraction for fine-tuned transformers models ☆180 Updated last year
- Data preparation code for Amber 7B LLM ☆94 Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆351 Updated 8 months ago