ray-project / llmperf-leaderboard
☆444, updated last year
Alternatives and similar repositories for llmperf-leaderboard:
Users who are interested in llmperf-leaderboard are comparing it to the libraries listed below.
- LLMPerf is a library for validating and benchmarking LLMs (☆710, updated last month)
- ☆497, updated 5 months ago
- NexusRaven-13B, a new SOTA open-source LLM for function calling. This repo contains everything for reproducing our evaluation on NexusRav… (☆311, updated last year)
- Serving multiple LoRA-finetuned LLMs as one (☆1,018, updated 8 months ago)
- Comparison of Language Model Inference Engines (☆203, updated last month)
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (☆894, updated this week)
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs (☆185, updated last month)
- An innovative library for efficient LLM inference via low-bit quantization (☆352, updated 5 months ago)
- Automatically evaluate your LLMs in Google Colab (☆584, updated 8 months ago)
- ☆218, updated this week
- Batched LoRAs (☆338, updated last year)
- RayLLM - LLMs on Ray (☆1,250, updated 8 months ago)
- A throughput-oriented high-performance serving framework for LLMs (☆714, updated 4 months ago)
- Efficient, Flexible and Portable Structured Generation (☆619, updated this week)
- Official implementation of Half-Quadratic Quantization (HQQ) (☆737, updated 2 weeks ago)
- A bagel, with everything. (☆315, updated 9 months ago)
- Tutorial for building an LLM router (☆173, updated 6 months ago)
- Benchmark suite for LLMs from Fireworks.ai (☆64, updated last month)
- A collection of all available inference solutions for LLMs (☆76, updated 4 months ago)
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… (☆284, updated this week)
- ☆52, updated 4 months ago
- ☆199, updated 11 months ago
- Fast parallel LLM inference for MLX (☆153, updated 6 months ago)
- Experiments with inference on Llama (☆104, updated 7 months ago)
- Scale LLM Engine public repository (☆789, updated this week)
- [ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling (☆1,590, updated 6 months ago)
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining (☆687, updated 9 months ago)
- ☆412, updated last year
- A tool for evaluating LLMs (☆400, updated 8 months ago)