Examples of training models with hybrid parallelism using ColossalAI
☆339 · Mar 23, 2023 · Updated 2 years ago
Alternatives and similar repositories for ColossalAI-Examples
Users interested in ColossalAI-Examples are comparing it to the libraries listed below.
- Performance benchmarking with ColossalAI ☆38 · Jul 6, 2022 · Updated 3 years ago
- Scalable PaLM implementation in PyTorch ☆190 · Dec 19, 2022 · Updated 3 years ago
- A collection of models built with ColossalAI ☆32 · Nov 22, 2022 · Updated 3 years ago
- Sky Computing: Accelerating Geo-distributed Computing in Federated Learning ☆90 · Nov 22, 2022 · Updated 3 years ago
- Large-scale model inference ☆627 · Sep 12, 2023 · Updated 2 years ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆125 · Nov 27, 2024 · Updated last year
- GPT Demo with hybrid distributed training ☆10 · Dec 1, 2022 · Updated 3 years ago
- ☆30 · Sep 4, 2023 · Updated 2 years ago
- Optimizing AlphaFold Training and Inference on GPU Clusters ☆612 · Jul 16, 2024 · Updated last year
- Making large AI models cheaper, faster and more accessible ☆41,359 · Updated this week
- ☆24 · Nov 22, 2022 · Updated 3 years ago
- A memory-efficient DLRM training solution using ColossalAI ☆107 · Nov 22, 2022 · Updated 3 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,435 · Mar 20, 2024 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,229 · Aug 14, 2025 · Updated 6 months ago
- Momentum Decoding: Open-ended Text Generation as Graph Exploration ☆19 · Jan 27, 2023 · Updated 3 years ago
- A curated list of awesome projects and papers for distributed training or inference ☆266 · Oct 8, 2024 · Updated last year
- 77% accuracy. Large-batch deep learning optimizer (LARS) for ImageNet with PyTorch and ResNet, using Horovod for distribution. Optional acc… ☆38 · Jun 1, 2021 · Updated 4 years ago
- ☆78 · May 4, 2021 · Updated 4 years ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,741 · Jan 8, 2024 · Updated 2 years ago
- ☆28 · Jul 11, 2021 · Updated 4 years ago
- Example models using DeepSpeed ☆6,785 · Feb 7, 2026 · Updated 2 weeks ago
- Codebase for Instruction Following without Instruction Tuning ☆36 · Sep 24, 2024 · Updated last year
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆478 · Mar 15, 2024 · Updated last year
- Ongoing research training transformer models at scale ☆15,242 · Updated this week
- Mengzi Pretrained Models ☆540 · Nov 29, 2022 · Updated 3 years ago
- Evaluation suite for large-scale language models ☆129 · Aug 15, 2021 · Updated 4 years ago
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆82 · Nov 19, 2024 · Updated last year
- [NIPS2023] RRHF & Wombat ☆809 · Sep 22, 2023 · Updated 2 years ago
- Elixir: Train a Large Language Model on a Small GPU Cluster ☆15 · Jun 8, 2023 · Updated 2 years ago
- Documentation for Colossal-AI ☆23 · Jun 6, 2025 · Updated 8 months ago
- OSLO: Open Source for Large-scale Optimization ☆175 · Sep 9, 2023 · Updated 2 years ago
- Transformer-related optimization, including BERT, GPT ☆6,394 · Mar 27, 2024 · Updated last year
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,513 · Updated this week
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,303 · May 16, 2023 · Updated 2 years ago
- [ICLR 2026] Adaptive Social Learning via Mode Policy Optimization for Language Agents ☆48 · Feb 2, 2026 · Updated 3 weeks ago
- ☆460 · Jun 9, 2024 · Updated last year
- Fengshenbang-LM (封神榜大模型) is an open-source large-model ecosystem led by the Cognitive Computing and Natural Language Research Center of the IDEA Institute, serving as infrastructure for Chinese AIGC and cognitive intelligence ☆4,147 · Aug 13, 2024 · Updated last year
- Training and serving large-scale neural networks with auto parallelization ☆3,183 · Dec 9, 2023 · Updated 2 years ago
- A utility for storing and reading files for Korean LM training 💾 ☆35 · Oct 15, 2025 · Updated 4 months ago