hpcaitech / Elixir
Elixir: Train a Large Language Model on a Small GPU Cluster
☆15 · Updated 2 years ago
Alternatives and similar repositories for Elixir
Users interested in Elixir are comparing it to the libraries listed below
- ☆74 · Updated 5 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding (see the speculative-decoding sketch after this list) ☆126 · Updated 9 months ago
- Summary of system papers/frameworks/code/tools on training or serving large models ☆57 · Updated last year
- ☆121 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆77 · Updated last year
- ☆111 · Updated last year
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆68 · Updated 9 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆82 · Updated last year
- Training library for Megatron-based models ☆74 · Updated this week
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆121 · Updated 9 months ago
- GPTQ inference TVM kernel ☆40 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM (a toy quantization sketch follows this list) ☆168 · Updated last year
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆216 · Updated last year
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- Vocabulary Parallelism ☆22 · Updated 6 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆43 · Updated 2 months ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆110 · Updated 6 months ago
- ☆86 · Updated 3 years ago
- GPU operators for sparse tensor operations ☆34 · Updated last year
- Transformers components but in Triton ☆34 · Updated 4 months ago
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆48 · Updated last year
- ☆78 · Updated 5 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆220 · Updated last year
- LLM Serving Performance Evaluation Harness ☆79 · Updated 6 months ago
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline". ☆90 · Updated 2 years ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆82 · Updated this week
- PyTorch bindings for CUTLASS grouped GEMM. ☆116 · Updated 3 months ago
- A simple calculation for LLM MFU (a worked example follows this list). ☆44 · Updated last week
- Quantized Attention on GPU ☆44 · Updated 9 months ago
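
Several entries above (the ICLR 2025 speculative-decoding work, and Ouroboros among them) build on speculative decoding. A minimal sketch of the core draft-then-verify loop with greedy acceptance, where `draft_model` and `target_model` are hypothetical callables mapping a token sequence to per-position logits; this is a generic illustration, not the method of any specific repository listed here.

```python
import torch

def speculative_step(draft_model, target_model, prefix: torch.Tensor, k: int = 4) -> torch.Tensor:
    """One round of greedy speculative decoding.

    The cheap draft model proposes k tokens autoregressively; the large
    target model scores the whole proposal in a single forward pass, and
    we keep the longest prefix on which the two models agree.
    """
    # 1. Draft k tokens with the small model.
    seq = prefix.clone()
    for _ in range(k):
        next_tok = draft_model(seq)[-1].argmax()      # logits at the last position
        seq = torch.cat([seq, next_tok.view(1)])

    # 2. Verify all proposals with one target-model forward pass.
    target_logits = target_model(seq)                 # shape: (len(seq), vocab)
    out = prefix.clone()
    for i in range(k):
        pos = prefix.numel() + i
        target_tok = target_logits[pos - 1].argmax()  # target's choice at this step
        out = torch.cat([out, target_tok.view(1)])
        if target_tok != seq[pos]:                    # first disagreement: stop
            break
    return out
```

Under greedy decoding this returns at least one new token per target-model call, and up to k when the draft agrees throughout, which is where the latency win comes from.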
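The GEAR entry above targets low-bit KV-cache compression. A toy per-channel uniform quantizer for a cached key or value tensor, just to make the memory-saving idea concrete; real recipes such as GEAR add outlier handling and a low-rank residual to stay near-lossless, which this sketch omits.

```python
import torch

def quantize_kv(kv: torch.Tensor, n_bits: int = 4):
    """Uniform per-channel quantization of a KV-cache tensor of shape
    (heads, seq_len, head_dim), with min/scale computed over seq_len."""
    qmax = 2 ** n_bits - 1
    lo = kv.amin(dim=-2, keepdim=True)
    scale = (kv.amax(dim=-2, keepdim=True) - lo).clamp(min=1e-8) / qmax
    q = ((kv - lo) / scale).round().clamp(0, qmax).to(torch.uint8)
    return q, scale, lo

def dequantize_kv(q: torch.Tensor, scale: torch.Tensor, lo: torch.Tensor) -> torch.Tensor:
    return q.to(scale.dtype) * scale + lo

# Usage: compress cached keys and check reconstruction error.
keys = torch.randn(8, 1024, 64)
q, scale, lo = quantize_kv(keys)
err = (dequantize_kv(q, scale, lo) - keys).abs().mean()
```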
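And for the MFU calculator noted above, a back-of-envelope version of the calculation, assuming the common approximation of ~6 FLOPs per parameter per training token (forward plus backward) and ignoring attention FLOPs; the numbers below are purely illustrative.

```python
def estimate_mfu(n_params: float, tokens_per_sec: float,
                 n_gpus: int, peak_flops_per_gpu: float) -> float:
    """Model FLOPs Utilization: achieved training FLOPs over hardware peak."""
    achieved = 6.0 * n_params * tokens_per_sec   # ~6 FLOPs per param per token
    peak = n_gpus * peak_flops_per_gpu
    return achieved / peak

# Illustrative: a 7B-parameter model on 8 GPUs rated at 312 TFLOPS (bf16) each,
# sustaining 20k tokens/s in aggregate.
print(f"MFU: {estimate_mfu(7e9, 20_000, 8, 312e12):.1%}")  # ~33.7%
```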