fw-ai / benchmark
Benchmark suite for LLMs from Fireworks.ai
☆84 · Updated 3 weeks ago
Alternatives and similar repositories for benchmark
Users interested in benchmark are comparing it to the libraries listed below.
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated 2 weeks ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆160 · Updated this week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆349 · Updated last week
- ☆56 · Updated last year
- Easy and Efficient Quantization for Transformers ☆203 · Updated 5 months ago
- ☆122 · Updated last year
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆323 · Updated 2 months ago
- ☆219 · Updated 10 months ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆261 · Updated this week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆62 · Updated 3 months ago
- LM engine is a library for pretraining/finetuning LLMs ☆77 · Updated last week
- Experiments on speculative sampling with Llama models ☆127 · Updated 2 years ago
- ☆120 · Updated last year
- Simple implementation of Speculative Sampling in NumPy for GPT-2. ☆98 · Updated 2 years ago
- OpenAI compatible API for TensorRT LLM triton backend ☆218 · Updated last year
- KV cache compression for high-throughput LLM inference ☆148 · Updated 10 months ago
- LLM Serving Performance Evaluation Harness ☆82 · Updated 9 months ago
- vLLM adapter for a TGIS-compatible gRPC server. ☆45 · Updated this week
- vLLM performance dashboard ☆40 · Updated last year
- vLLM Router ☆52 · Updated last year
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆131 · Updated 3 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆278 · Updated last year
- ☆206 · Updated 7 months ago
- ☆60 · Updated last year
- experiments with inference on llama ☆103 · Updated last year
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆272 · Updated this week
- ☆321 · Updated this week
- Data preparation code for Amber 7B LLM ☆94 · Updated last year
- Inference server benchmarking tool ☆130 · Updated 2 months ago
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆135 · Updated last year