Michaelvll / llm-ie-benchmarks
A collection of reproducible inference engine benchmarks
☆37 · Updated 6 months ago
Alternatives and similar repositories for llm-ie-benchmarks
Users interested in llm-ie-benchmarks are comparing it to the libraries listed below.
- Make Triton easier ☆48 · Updated last year
- Benchmark suite for LLMs from Fireworks.ai ☆83 · Updated 2 weeks ago
- Simple high-throughput inference library ☆149 · Updated 6 months ago
- DPO, but faster 🚀 ☆46 · Updated 11 months ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆59 · Updated 3 weeks ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆60 · Updated last year
- Benchmark for machine learning model online serving (LLM, embedding, Stable-Diffusion, Whisper) ☆28 · Updated 2 years ago
- Repository for sparse fine-tuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- Repository for CPU Kernel Generation for LLM Inference ☆26 · Updated 2 years ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN ☆73 · Updated last year
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · Updated last year
- Train, tune, and run inference with the Bamba model ☆136 · Updated 5 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆56 · Updated 3 weeks ago
- Using FlexAttention to compute attention with different masking patterns ☆47 · Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆87 · Updated last year
- Some microbenchmarks and design docs before commencement ☆12 · Updated 4 years ago
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆68 · Updated 11 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- ☆31 · Updated 6 months ago
- Utilities for Training Very Large Models ☆58 · Updated last year
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- ☆65 · Updated 7 months ago
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated 8 months ago
- ☆54 · Updated last year
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) ☆25 · Updated 4 months ago
- ☆39 · Updated last year
- ☆60 · Updated 5 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 ☆45 · Updated last year
- [NeurIPS 2025] Scaling Speculative Decoding with Lookahead Reasoning ☆51 · Updated 2 weeks ago
- PyTorch Distributed native training library for LLMs/VLMs with OOTB Hugging Face support ☆167 · Updated this week