A high-throughput and memory-efficient inference and serving engine for LLMs
☆132 · Jun 25, 2024 · Updated last year
Alternatives and similar repositories for vllm-gptq
Users interested in vllm-gptq are comparing it to the libraries listed below.
- QuIP quantization ☆63 · Mar 17, 2024 · Updated 2 years ago
- MMLU eval for RU/EN ☆15 · Jul 31, 2023 · Updated 2 years ago
- Yet another frontend for LLMs, written using .NET and WinUI 3 ☆10 · Sep 14, 2025 · Updated 6 months ago
- An OpenAI API-compatible LLM inference server based on ExLlamaV2 ☆25 · Feb 9, 2024 · Updated 2 years ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference ☆2,318 · May 11, 2025 · Updated 10 months ago
- fast-embeddings-api ☆16 · Nov 23, 2023 · Updated 2 years ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm ☆5,035 · Apr 11, 2025 · Updated 11 months ago
- Attend - to what matters ☆17 · Feb 22, 2025 · Updated last year
- The one who calls upon functions - Function-Calling Language Model ☆36 · Oct 2, 2023 · Updated 2 years ago
- ☆22 · Mar 18, 2024 · Updated 2 years ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,468 · Mar 4, 2026 · Updated 3 weeks ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs ☆7,711 · Updated this week
- A more memory-efficient rewrite of the HF Transformers implementation of Llama for use with quantized weights ☆2,912 · Sep 30, 2023 · Updated 2 years ago
- ☆13 · Dec 21, 2024 · Updated last year
- An implementation of the MSSRM method ☆11 · Mar 23, 2023 · Updated 3 years ago
- A multi-session and multi-therapy benchmark for a high-realism AI psychological counselor ☆33 · Jan 13, 2026 · Updated 2 months ago
- Completion After Prompt Probability. Make your LLM make a choice ☆82 · Nov 2, 2024 · Updated last year
- OpenAI-style API for open large language models; use LLMs just like ChatGPT! Supports LLaMA, LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, X… ☆2,468 · Sep 26, 2024 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆141 · Dec 6, 2024 · Updated last year
- A high-performance batching router that optimizes maximum throughput for text inference workloads ☆16 · Sep 6, 2023 · Updated 2 years ago
- ☆24 · Apr 9, 2024 · Updated last year
- Topic-supervised non-negative matrix factorization with sparse matrices ☆12 · Mar 24, 2020 · Updated 6 years ago
- oobabooga/text-generation-webui implementation of wafflecomposite's langchain-ask-pdf-local ☆71 · May 9, 2023 · Updated 2 years ago
- Categorize credit card transactions using a local large language model similar to GPT-3 ☆15 · Dec 29, 2023 · Updated 2 years ago
- Boosting Natural Language Generation from Instructions with Meta-Learning ☆11 · Dec 20, 2022 · Updated 3 years ago
- ☆129 · Dec 24, 2024 · Updated last year
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models ☆444 · Oct 11, 2024 · Updated last year
- A simple web server for generating text with exllamav2 ☆14 · Dec 18, 2023 · Updated 2 years ago
- A fast batching API to serve LLM models ☆189 · Apr 26, 2024 · Updated last year
- The simplest reproduction of R1-style results on a small model, illustrating what matters most about o1-like models and DeepSeek R1. Think is all you need: experiments suggest that for strong reasoning ability, the content of the "think" process is the core of AGI/ASI ☆45 · Feb 8, 2025 · Updated last year
- ☆97 · Nov 6, 2024 · Updated last year
- Accelerate vector generation by using an ONNX model ☆18 · Jan 23, 2024 · Updated 2 years ago
- An LLM interface you can use to analyze and get insights into diary entries or other documents, completely offline ☆16 · Dec 31, 2023 · Updated 2 years ago
- 🩹Editing large language models within 10 seconds⚡ ☆1,357 · Aug 13, 2023 · Updated 2 years ago
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆219 · Aug 1, 2024 · Updated last year
- Code example for pretraining an LLM with a vanilla PyTorch training loop ☆10 · Jun 6, 2024 · Updated last year
- See https://github.com/cuda-mode/triton-index/ instead! ☆11 · May 8, 2024 · Updated last year
- France compared to Italy ☆10 · Jan 9, 2022 · Updated 4 years ago
- 4-bit quantization of LLaMA using GPTQ ☆3,073 · Jul 13, 2024 · Updated last year