Michaelvll / llm-ie-benchmarks
A collection of reproducible inference engine benchmarks
☆38 · Updated 8 months ago
Alternatives and similar repositories for llm-ie-benchmarks
Users interested in llm-ie-benchmarks are comparing it to the libraries listed below.
- Make Triton easier ☆49 · Updated last year
- Simple high-throughput inference library ☆152 · Updated 7 months ago
- Some microbenchmarks and design docs before commencement ☆12 · Updated 4 years ago
- DPO, but faster 🚀 ☆46 · Updated last year
- Repository for CPU Kernel Generation for LLM Inference ☆27 · Updated 2 years ago
- Benchmark for machine learning model online serving (LLM, embedding, Stable-Diffusion, Whisper) ☆28 · Updated 2 years ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆63 · Updated last week
- Tooling for exact and MinHash deduplication of large-scale text datasets ☆44 · Updated last week
- ☆63 · Updated 7 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆84 · Updated last month
- Using FlexAttention to compute attention with different masking patterns ☆47 · Updated last year
- ☆31 · Updated 8 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆60 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- CUDA and Triton implementations of Flash Attention with SoftmaxN ☆73 · Updated last year
- ☆66 · Updated 9 months ago
- ☆59 · Updated 2 years ago
- [WIP] Better (FP8) attention for Hopper ☆32 · Updated 10 months ago
- ☆39 · Updated last year
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆69 · Updated last year
- ☆55 · Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆88 · Updated last year
- Simple and efficient DeepSeek V3 SFT using pipeline parallelism and expert parallelism, with both FP8 and BF16 training ☆101 · Updated 4 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 ☆46 · Updated last year
- ☆15 · Updated 7 months ago
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · Updated last year
- Utilities for Training Very Large Models ☆58 · Updated last year
- Fused Qwen3 MoE layer for faster training, compatible with HF Transformers, LoRA, 4-bit quant, and Unsloth ☆217 · Updated last month
- ☆54 · Updated last year
- Cray-LM unified training and inference stack ☆22 · Updated 10 months ago