hpcaitech / Elixir
Elixir: Train a Large Language Model on a Small GPU Cluster
☆14 · Updated last year
Alternatives and similar repositories for Elixir:
Users interested in Elixir are comparing it to the libraries listed below.
- Transformer components, but in Triton ☆32 · Updated last month
- Vocabulary Parallelism ☆17 · Updated last month
- Odysseus: Playground of LLM Sequence Parallelism ☆68 · Updated 10 months ago
- [ICLR2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆115 · Updated 4 months ago
- Summary of system papers/frameworks/codes/tools on training or serving large models ☆56 · Updated last year
- ☆68 · Updated 3 weeks ago
- Quantized Attention on GPU ☆45 · Updated 5 months ago
- A simple calculation for LLM MFU ☆34 · Updated last month
- Boosting 4-bit inference kernels with 2:4 sparsity ☆72 · Updated 7 months ago
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on Multi-GPU Clusters ☆44 · Updated 8 months ago
- ☆22 · Updated last year
- ☆103 · Updated 7 months ago
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" ☆85 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM ☆81 · Updated 5 months ago
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆64 · Updated 4 months ago
- GPTQ inference TVM kernel ☆38 · Updated 11 months ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆113 · Updated 4 months ago
- ☆26 · Updated last year
- A 32-times-longer context window than vanilla Transformers, and up to 4 times longer than memory-efficient Transformers ☆47 · Updated last year
- A minimal implementation of vllm ☆39 · Updated 8 months ago
- Repository for CPU Kernel Generation for LLM Inference ☆26 · Updated last year
- An easily extensible framework for understanding and optimizing CUDA operators, intended for learning use only ☆13 · Updated 10 months ago
- ☆20 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- ☆81 · Updated 3 weeks ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆39 · Updated last year
- ☆82 · Updated 3 years ago
- ☆54 · Updated last week
- Inference framework for MoE layers based on TensorRT with Python bindings ☆41 · Updated 3 years ago
- Sequence-level 1F1B schedule for LLMs ☆17 · Updated 10 months ago