fw-ai / benchmark
Benchmark suite for LLMs from Fireworks.ai
☆69 · Updated last month
Alternatives and similar repositories for benchmark:
Users interested in benchmark are comparing it to the libraries listed below.
- ☆48 · Updated 4 months ago
- ☆54 · Updated 6 months ago
- LLM Serving Performance Evaluation Harness ☆70 · Updated 3 weeks ago
- KV cache compression for high-throughput LLM inference ☆117 · Updated last month
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆48 · Updated this week
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆109 · Updated 3 months ago
- The code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆108 · Updated 9 months ago
- Easy and Efficient Quantization for Transformers ☆192 · Updated last month
- The driver for LMCache core to run in vLLM ☆34 · Updated last month
- Experiments with inference on Llama ☆104 · Updated 9 months ago
- The official repo for "LLoCO: Learning Long Contexts Offline" ☆115 · Updated 9 months ago
- A toolkit for fine-tuning, inference, and evaluation of GreenBitAI's LLMs ☆79 · Updated last week
- ☆237 · Updated last week
- ☆179 · Updated 5 months ago
- Simple implementation of Speculative Sampling in NumPy for GPT-2 ☆92 · Updated last year
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆212 · Updated last week
- Pretrain, fine-tune, and serve LLMs on Intel platforms with Ray ☆121 · Updated 3 weeks ago
- ☆116 · Updated last year
- PyTorch building blocks for the OLMo ecosystem ☆124 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆262 · Updated 5 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆253 · Updated 8 months ago
- Experiments on speculative sampling with Llama models ☆125 · Updated last year
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆68 · Updated 6 months ago
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆60 · Updated 3 months ago
- Data preparation code for the Amber 7B LLM ☆86 · Updated 10 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆59 · Updated 5 months ago
- ☆170 · Updated last week
- vLLM performance dashboard ☆23 · Updated 10 months ago