hpcaitech / Elixir
Elixir: Train a Large Language Model on a Small GPU Cluster
☆15 · Updated 2 years ago
Alternatives and similar repositories for Elixir
Users interested in Elixir are comparing it to the libraries listed below.
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆137 · Updated last year
- ☆71 · Updated 10 months ago
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆57 · Updated 2 years ago
- ☆125 · Updated last year
- ☆27 · Updated 2 years ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLMs ☆176 · Updated last year
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆69 · Updated last year
- ☆115 · Updated last year
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆93 · Updated last year
- ☆89 · Updated 3 years ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆125 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆79 · Updated last year
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" ☆93 · Updated 2 years ago
- ☆84 · Updated 9 months ago
- ☆94 · Updated 3 years ago
- (NeurIPS 2022) Automatically finding good model-parallel strategies, especially for complex models and clusters ☆44 · Updated 3 years ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLM Training ☆222 · Updated last year
- [NeurIPS '23] Speculative Decoding with Big Little Decoder ☆96 · Updated last year
- GPTQ inference TVM kernel ☆41 · Updated last year
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆113 · Updated 10 months ago
- Vortex: A Flexible and Efficient Sparse Attention Framework ☆45 · Updated last week
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- Transformers components, but in Triton ☆34 · Updated 8 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆119 · Updated last year
- Repository for sparse fine-tuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated 2 years ago
- Vocabulary Parallelism ☆25 · Updated 10 months ago
- LLM Serving Performance Evaluation Harness ☆83 · Updated 11 months ago
- Repository for CPU Kernel Generation for LLM Inference ☆27 · Updated 2 years ago
- ☆61 · Updated 2 years ago
- Triton implementation of FlashAttention-2 ☆47 · Updated 2 years ago