fw-ai / benchmark
Benchmark suite for LLMs from Fireworks.ai
☆84 · Updated last month
Alternatives and similar repositories for benchmark
Users interested in benchmark are comparing it to the libraries listed below.
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆182 · Updated this week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆363 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆268 · Updated last month
- ☆56 · Updated last year
- ☆123 · Updated last year
- Easy and Efficient Quantization for Transformers ☆202 · Updated 6 months ago
- LLM Serving Performance Evaluation Harness ☆82 · Updated 10 months ago
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆62 · Updated 3 months ago
- Experiments on speculative sampling with Llama models ☆127 · Updated 2 years ago
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆131 · Updated 3 months ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆267 · Updated this week
- vLLM performance dashboard ☆39 · Updated last year
- vLLM adapter for a TGIS-compatible gRPC server ☆47 · Updated this week
- ☆120 · Updated last year
- KV cache compression for high-throughput LLM inference ☆148 · Updated 11 months ago
- ☆206 · Updated 8 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆136 · Updated last year
- LM engine is a library for pretraining/finetuning LLMs ☆108 · Updated this week
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆325 · Updated 3 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆90 · Updated last year
- Data preparation code for Amber 7B LLM ☆94 · Updated last year
- Simple implementation of Speculative Sampling in NumPy for GPT-2 ☆99 · Updated 2 years ago
- vLLM Router ☆54 · Updated last year
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆228 · Updated this week
- ☆219 · Updated 11 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆278 · Updated last year
- Experiments with inference on Llama ☆103 · Updated last year
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆276 · Updated this week
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- The code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System ☆153 · Updated last year
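Several of the projects above (benchmark itself, the LLM Serving Performance Evaluation Harness, the vLLM performance dashboard) measure serving latency and throughput. As a rough illustration of the kind of measurement such harnesses automate, here is a minimal sketch that times completions against an OpenAI-compatible endpoint; the endpoint URL, model name, and prompt are placeholder assumptions, not taken from any repository listed here.

```python
# Minimal serving-benchmark sketch: send N requests to an OpenAI-compatible
# chat endpoint and report per-request latency and rough output throughput.
import statistics
import time

import requests

ENDPOINT = "http://localhost:8000/v1/chat/completions"  # hypothetical local server
MODEL = "my-model"  # placeholder model id
PROMPT = "Summarize the benefits of speculative decoding in one sentence."
N_REQUESTS = 10

latencies = []
completion_tokens = 0

for _ in range(N_REQUESTS):
    start = time.perf_counter()
    resp = requests.post(
        ENDPOINT,
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": PROMPT}],
            "max_tokens": 128,
        },
        timeout=120,
    )
    resp.raise_for_status()
    latencies.append(time.perf_counter() - start)
    # OpenAI-compatible servers report token counts in the "usage" field.
    completion_tokens += resp.json().get("usage", {}).get("completion_tokens", 0)

print(f"p50 latency:  {statistics.median(latencies):.3f}s")
print(f"mean latency: {statistics.mean(latencies):.3f}s")
print(f"throughput:   {completion_tokens / sum(latencies):.1f} output tokens/s")
```

Real harnesses like the ones above add concurrency, warm-up, prompt-length sweeps, and percentile reporting on top of this basic loop.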