alpa-projects / alpa
Training and serving large-scale neural networks with auto parallelization.
☆3,180 · Dec 9, 2023 · Updated 2 years ago
Alternatives and similar repositories for alpa
Users interested in alpa are comparing it to the libraries listed below.
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training (☆1,859 · Feb 7, 2026 · Updated last week)
- Ongoing research training transformer models at scale (☆15,162 · Updated this week)
- Transformer-related optimization, including BERT, GPT (☆6,392 · Mar 27, 2024 · Updated last year)
- Running large language models on a single GPU for throughput-oriented scenarios (☆9,384 · Oct 28, 2024 · Updated last year)
- PyTorch extensions for high-performance and large-scale training (☆3,397 · Apr 26, 2025 · Updated 9 months ago)
- Development repository for the Triton language and compiler (☆18,387 · Updated this week); see the Triton kernel sketch after this list
- Repo for external large-scale work (☆6,544 · Apr 27, 2024 · Updated last year)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… (☆3,152 · Feb 7, 2026 · Updated last week)
- Fast and memory-efficient exact attention (☆22,231 · Updated this week); see the attention sketch after this list
- Pipeline Parallelism for PyTorch (☆785 · Aug 21, 2024 · Updated last year)
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed (☆2,093 · Jun 30, 2025 · Updated 7 months ago)
- FlashInfer: Kernel Library for LLM Serving (☆4,935 · Updated this week)
- Accessible large language models via k-bit quantization for PyTorch (☆7,952 · Updated this week); see the 4-bit loading sketch after this list
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective (☆41,578 · Feb 7, 2026 · Updated last week)
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… (☆4,704 · Jan 12, 2026 · Updated last month)
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… (☆1,586 · Jan 28, 2026 · Updated 2 weeks ago)
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description (☆1,006 · Sep 19, 2024 · Updated last year)
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) (☆4,742 · Jan 8, 2024 · Updated 2 years ago)
- A list of awesome compiler projects and papers for tensor computation and deep learning (☆2,728 · Oct 19, 2024 · Updated last year)
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 (☆963 · Dec 21, 2025 · Updated last month)
- A schedule language for large model training (☆152 · Aug 21, 2025 · Updated 5 months ago)
- Hackable and optimized Transformers building blocks, supporting a composable construction (☆10,336 · Feb 5, 2026 · Updated last week)
- Open Machine Learning Compiler Framework (☆13,117 · Updated this week)
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster (☆1,072 · Apr 17, 2024 · Updated last year)
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… (☆3,888 · Updated this week)
- SGLang is a high-performance serving framework for large language models and multimodal models (☆23,439 · Updated this week)
- Large Language Model Text Generation Inference (☆10,757 · Jan 8, 2026 · Updated last month)
- Serving multiple LoRA finetuned LLMs as one (☆1,139 · May 8, 2024 · Updated last year)
- CUDA Templates and Python DSLs for High-Performance Linear Algebra (☆9,266 · Updated this week)
- Run, manage, and scale AI workloads on any AI infrastructure. Use one system to access & manage all AI compute (Kubernetes, 20+ clouds, o… (☆9,442 · Updated this week)
- Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI libraries for accelerating ML workloads (☆41,176 · Feb 8, 2026 · Updated last week); see the Ray tasks sketch after this list
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆70,205 · Updated this week); see the offline-generation sketch after this list
- An open-source efficient deep learning framework/compiler, written in Python (☆739 · Sep 4, 2025 · Updated 5 months ago)
- Making large AI models cheaper, faster and more accessible (☆41,346 · Jan 19, 2026 · Updated 3 weeks ago)
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… (☆9,491 · Feb 6, 2026 · Updated last week); see the training-loop sketch after this list
- Distributed Compiler based on Triton for Parallel Systems (☆1,350 · Updated this week)
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… (☆12,867 · Updated this week)
- Zero Bubble Pipeline Parallelism (☆449 · May 7, 2025 · Updated 9 months ago)
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 (☆2,224 · Aug 14, 2025 · Updated 6 months ago)
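For a few of the better-known entries above, the sketches below show their typical entry-point APIs. They are minimal, illustrative examples only; model names, shapes, and configuration values are placeholders, not recommendations.

The Triton entry (the language and compiler development repository) is programmed by writing Python-decorated GPU kernels. A minimal element-wise add kernel, assuming a CUDA GPU and the `triton` package:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                            # each program instance handles one block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                            # guard the tail block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.randn(4096, device="cuda")
y = torch.randn(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)                     # one program per 1024-element block
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
```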
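The "Fast and memory-efficient exact attention" entry corresponds to FlashAttention, which exposes a drop-in attention function. A minimal sketch, assuming the `flash-attn` package and half-precision tensors on a CUDA device:

```python
import torch
from flash_attn import flash_attn_func

# (batch, seqlen, num_heads, head_dim) tensors in fp16/bf16 on the GPU
q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

# Exact attention computed blockwise, without materializing the full score matrix
out = flash_attn_func(q, k, v, causal=True)
```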
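"Accessible large language models via k-bit quantization for PyTorch" describes bitsandbytes, which is most commonly used through the Hugging Face Transformers `BitsAndBytesConfig` integration. A minimal 4-bit loading sketch; the checkpoint name is only a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "facebook/opt-1.3b"  # any causal LM checkpoint; illustrative choice
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # quantize linear-layer weights to 4 bits
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,   # compute in bf16 for speed/stability
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quant_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```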
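Ray's core API schedules plain Python functions as distributed tasks. A minimal sketch, run on a local Ray runtime:

```python
import ray

ray.init()  # local runtime; on a cluster, connect with ray.init(address="auto")

@ray.remote
def square(x):
    return x * x

futures = [square.remote(i) for i in range(8)]  # tasks execute in parallel on workers
print(ray.get(futures))                          # [0, 1, 4, 9, 16, 25, 36, 49]
```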
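The "high-throughput and memory-efficient inference and serving engine for LLMs" entry matches vLLM's description; assuming that, its offline entry point is the `LLM` class. A minimal offline-generation sketch with an illustrative small model:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # small checkpoint purely for illustration
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The capital of France is", "Large models are"], params)
for request_output in outputs:
    print(request_output.outputs[0].text)
```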
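The 🚀 launch/train entry corresponds to Hugging Face Accelerate, which wraps an ordinary PyTorch training loop so the same script runs on CPU, single GPU, or a distributed setup. A minimal training-loop sketch with synthetic data:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # picks up device and distributed config from the launcher

model = torch.nn.Linear(32, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,)))
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# prepare() moves everything to the right device and wraps for DDP/AMP as needed
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    accelerator.backward(loss)   # replaces loss.backward() so mixed-precision/DDP hooks run
    optimizer.step()
```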