XiongjieDai / GPU-Benchmarks-on-LLM-Inference
Multiple NVIDIA GPUs or Apple Silicon for Large Language Model Inference?
☆1,867 · Updated last year
Alternatives and similar repositories for GPU-Benchmarks-on-LLM-Inference
Users interested in GPU-Benchmarks-on-LLM-Inference are comparing it to the libraries listed below.
- Large-scale LLM inference engine☆1,631 · Updated this week
- A fast inference library for running LLMs locally on modern consumer-class GPUs☆4,426 · Updated last month
- Reliable model swapping for any local OpenAI/Anthropic-compatible server (llama.cpp, vLLM, etc.; see the client sketch after this list)☆2,209 · Updated last week
- The official API server for Exllama. OAI-compatible, lightweight, and fast.☆1,115 · Updated this week
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.☆2,906 · Updated 2 years ago
- This repo contains the source code for RULER: What’s the Real Context Size of Your Long-Context Language Models?☆1,435 · Updated 2 months ago
- Infinity is a high-throughput, low-latency serving engine for text embeddings, reranking models, CLIP, CLAP, and ColPali☆2,629 · Updated last month
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX.☆2,025 · Updated this week
- NVIDIA Linux open GPU with P2P support☆1,313 · Updated 7 months ago
- Enforce the output format (JSON Schema, Regex, etc.) of a language model☆1,979 · Updated 5 months ago
- Create Custom LLMs☆1,804 · Updated 2 months ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference.☆2,305 · Updated 8 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM☆2,615 · Updated this week
- Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization (see the memory-arithmetic sketch after this list)☆1,388 · Updated last year
- LLMPerf is a library for validating and benchmarking LLMs☆1,080 · Updated last year
- llama.cpp fork with additional SOTA quants and improved performance☆1,553 · Updated this week
- Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices means faster inference.☆2,804 · Updated last week
- Official implementation of Half-Quadratic Quantization (HQQ)☆907 · Updated last month
- LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU and Intel/AMD/Apple CPU vi…☆982 · Updated last week
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs☆622 · Updated this week
- Optimizing inference proxy for LLMs☆3,288 · Updated last month
- Comparison of Language Model Inference Engines☆239 · Updated last year
- INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model☆1,562 · Updated 10 months ago
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU)☆777 · Updated this week
- 🎯 An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality degradation across Weight-Only Quantiza…☆830 · Updated this week
- Implements harmful/harmless refusal removal using pure HF Transformers☆1,445 · Updated 2 months ago
- LLM Benchmark for Throughput via Ollama (Local LLMs)☆323 · Updated last week
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). Allowing users to chat with LLM …☆612 · Updated 11 months ago
- Simple Go utility to download HuggingFace Models and Datasets☆813 · Updated 2 weeks ago
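
Several entries above (the model-swapping proxy, the Exllama API server, and llama.cpp's own server) expose an OpenAI-compatible HTTP API, so a single client can drive any of them. The client sketch below uses only the Python standard library; the base URL, port, and model name are placeholder assumptions, not values taken from any listed repository.

```python
# Minimal client for any OpenAI-compatible local server
# (llama.cpp server, an Exllama API server, vLLM, ...).
# The URL, port, and model name are placeholders -- adjust for your setup.
import json
import urllib.request

BASE_URL = "http://localhost:8080/v1"  # assumed local endpoint

def chat(prompt: str, model: str = "local-model") -> str:
    """Send one user message to the /chat/completions endpoint
    and return the assistant's reply text."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("How much VRAM does a 7B model need at 4-bit?"))
```

Because the request shape is the standard OpenAI chat-completions schema, swapping between the listed servers usually means changing only `BASE_URL` and the model name.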
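The token/s and GPU-memory calculator listed above rests on simple arithmetic: weights occupy roughly parameter count times bytes per parameter, plus headroom for the KV cache and runtime buffers. The memory-arithmetic sketch below illustrates that estimate; the overhead factor, the bit-width table, and the 70B example are illustrative assumptions, not values taken from the calculator.

```python
# Rough VRAM estimate for LLM inference -- illustrative only.
# The overhead factor and bit-widths below are assumptions,
# not values taken from any repository listed above.

QUANT_BITS = {"fp16": 16, "int8": 8, "q4": 4}  # common quantization levels

def estimate_vram_gib(n_params_b: float, quant: str = "q4",
                      overhead: float = 1.2) -> float:
    """Approximate GPU memory (GiB) to hold the weights of an
    n_params_b-billion-parameter model at the given quantization,
    with a flat multiplier for KV cache and runtime buffers."""
    bytes_per_param = QUANT_BITS[quant] / 8
    weights_gib = n_params_b * 1e9 * bytes_per_param / 2**30
    return weights_gib * overhead

if __name__ == "__main__":
    for quant in ("fp16", "int8", "q4"):
        print(f"70B @ {quant}: ~{estimate_vram_gib(70, quant):.0f} GiB")
```

Under these assumptions a 70B model needs roughly 156 GiB at fp16 but only about 39 GiB at 4-bit, which is why multi-GPU rigs, unified Apple Silicon memory, and the 4-bit quantization tools above dominate this list.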