XiongjieDai / GPU-Benchmarks-on-LLM-Inference
Multiple NVIDIA GPUs or Apple Silicon for Large Language Model Inference?
☆1,848 · Updated last year
Alternatives and similar repositories for GPU-Benchmarks-on-LLM-Inference
Users interested in GPU-Benchmarks-on-LLM-Inference are comparing it to the libraries listed below.
- Large-scale LLM inference engine ☆1,607 · Updated 3 weeks ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,379 · Updated 3 months ago
- Calculate token/s & GPU memory requirements for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization (see the memory-estimate sketch after this list) ☆1,383 · Updated last year
- This repo contains the source code for RULER: What’s the Real Context Size of Your Long-Context Language Models? ☆1,390 · Updated last month
- Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference; the more devices, the faster the inference. ☆2,761 · Updated last week
- The official API server for Exllama. OAI compatible, lightweight, and fast. ☆1,097 · Updated this week
- llama.cpp fork with additional SOTA quants and improved performance ☆1,387 · Updated this week
- Reliable model swapping for any local OpenAI/Anthropic compatible server (llama.cpp, vLLM, etc.) ☆2,025 · Updated last week
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆1,936 · Updated this week
- Optimizing inference proxy for LLMs ☆3,221 · Updated last week
- Create Custom LLMs ☆1,781 · Updated last month
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,395 · Updated this week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆2,289 · Updated 7 months ago
- LLMPerf is a library for validating and benchmarking LLMs ☆1,062 · Updated last year
- VS Code extension for LLM-assisted code/text completion ☆1,098 · Updated 3 weeks ago
- NVIDIA Linux open GPU with P2P support ☆1,294 · Updated 6 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,904 · Updated 2 years ago
- Enforce the output format (JSON Schema, Regex, etc.) of a language model ☆1,965 · Updated 3 months ago
- Python bindings for the Transformer models implemented in C/C++ using the GGML library. ☆1,877 · Updated last year
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). Allowing users to chat with LLM … ☆610 · Updated 9 months ago
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆597 · Updated this week
- Infinity is a high-throughput, low-latency serving engine for text embeddings, reranking models, CLIP, CLAP and ColPali ☆2,575 · Updated 3 weeks ago
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,566 · Updated 6 months ago
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆753 · Updated this week
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,170 · Updated last year
- Official implementation of Half-Quadratic Quantization (HQQ) ☆900 · Updated last month
- Simple Go utility to download HuggingFace models and datasets ☆782 · Updated 3 months ago
- LM Studio Apple MLX engine ☆841 · Updated this week
- LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU and Intel/AMD/Apple CPU vi… ☆924 · Updated this week
- LLM frontend in a single HTML file ☆671 · Updated last week
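The token/s & GPU-memory calculator entry in the list above estimates how much VRAM a given model needs at a given quantization. As a minimal sketch of that idea (not that project's actual code), the estimate below sums quantized weight size plus KV-cache size plus an assumed overhead factor; the layer/head figures are for a hypothetical 7B Llama-style config, and the 1.15 overhead multiplier is a rough guess.

```python
# Back-of-the-envelope VRAM estimate for LLM inference (illustrative sketch only).

def estimate_vram_gib(
    n_params_b: float,       # parameter count, in billions
    bits_per_weight: float,  # 16 for fp16, ~4.5 for 4-bit K-quant style formats
    n_layers: int,
    n_kv_heads: int,
    head_dim: int,
    context_len: int,
    kv_bits: int = 16,       # KV cache precision
    overhead: float = 1.15,  # assumed fudge factor for activations/buffers
) -> float:
    weights = n_params_b * 1e9 * bits_per_weight / 8                              # weight bytes
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * context_len * kv_bits / 8   # K and V bytes
    return (weights + kv_cache) * overhead / 1024**3

if __name__ == "__main__":
    # Hypothetical 7B model (32 layers, 32 KV heads, head_dim 128), 4-bit weights, 4k context.
    print(f"~{estimate_vram_gib(7, 4.5, 32, 32, 128, 4096):.1f} GiB")
```

This is only a first-order sizing check; real tools also account for grouped-query attention, per-layer quantization mixes, and runtime-specific buffers.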