Michaelvll / llm-ie-benchmarks
A collection of reproducible inference engine benchmarks
☆30 · Updated 3 weeks ago
Alternatives and similar repositories for llm-ie-benchmarks
Users interested in llm-ie-benchmarks are comparing it to the repositories listed below:
- Make Triton easier ☆47 · Updated 11 months ago
- Using FlexAttention to compute attention with different masking patterns ☆43 · Updated 7 months ago
- Benchmark for machine learning model online serving (LLM, embedding, Stable Diffusion, Whisper) ☆28 · Updated last year
- Some microbenchmarks and design docs before commencement ☆12 · Updated 4 years ago
- ☆27 · Updated 2 weeks ago
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆65 · Updated 5 months ago
- ☆54 · Updated this week
- Odysseus: Playground of LLM Sequence Parallelism ☆69 · Updated 10 months ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆55 · Updated last week
- FlexAttention w/ FlashAttention3 Support ☆26 · Updated 7 months ago
- Utilities for Training Very Large Models ☆58 · Updated 7 months ago
- DPO, but faster 🚀 ☆42 · Updated 5 months ago
- ☆25 · Updated 3 weeks ago
- ☆70 · Updated last week
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆41 · Updated last year
- ☆24 · Updated this week
- ☆45 · Updated 2 months ago
- Compression for Foundation Models ☆31 · Updated last month
- Repository for CPU Kernel Generation for LLM Inference ☆26 · Updated last year
- ☆13 · Updated this week
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆116 · Updated 5 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆72 · Updated this week
- ☆49 · Updated last year
- ☆15 · Updated last month
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆41 · Updated 2 weeks ago
- Vocabulary Parallelism ☆19 · Updated 2 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆73 · Updated 8 months ago
- Code for data-aware compression of DeepSeek models ☆24 · Updated last month
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) ☆25 · Updated 11 months ago
- ☆20 · Updated 3 weeks ago