Ongoing research training transformer language models at scale, including: BERT & GPT-2
☆1,435, updated Mar 20, 2024
Alternatives and similar repositories for Megatron-DeepSpeed
Users interested in Megatron-DeepSpeed are comparing it to the libraries listed below.
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 (☆2,229, updated Aug 14, 2025)
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. (☆1,008, updated Jul 29, 2024)
- Ongoing research training transformer models at scale (☆15,242, updated Feb 21, 2026)
- Example models using DeepSpeed (☆6,785, updated Feb 7, 2026)
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) (☆4,741, updated Jan 8, 2024)
- Best practices for training LLaMA models in Megatron-LM (☆663, updated Jan 2, 2024)
- Transformer-related optimization, including BERT, GPT (☆6,394, updated Mar 27, 2024)
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. (☆2,095, updated Jun 30, 2025)
- Crosslingual Generalization through Multitask Finetuning (☆537, updated Sep 22, 2024)
- Fast Inference Solutions for BLOOM (☆566, updated Oct 9, 2024)
- An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries (☆7,392, updated Feb 3, 2026)
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud (☆1,528, updated Dec 15, 2025)
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" (☆1,816, updated Jun 17, 2025)
- Fast and memory-efficient exact attention (☆22,361, updated this week)
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. (☆41,648, updated this week)
- BELLE: Be Everyone's Large Language model Engine (an open-source Chinese conversational LLM) (☆8,281, updated Oct 16, 2024)
- Train transformer language models with reinforcement learning. (☆17,460, updated this week)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… (☆3,170, updated Feb 21, 2026)
- Instruction Tuning with GPT-4 (☆4,342, updated Jun 11, 2023)
- GLM-130B: An Open Bilingual Pre-Trained Model (ICLR 2023) (☆7,672, updated Jul 25, 2023)
- A framework for few-shot evaluation of language models. (☆11,478, updated Feb 15, 2026)
- Toolkit for creating, sharing and using natural language prompts. (☆2,996, updated Oct 23, 2023)
- Repo for external large-scale work (☆6,543, updated Apr 27, 2024)
- Code used for sourcing and cleaning the BigScience ROOTS corpus (☆318, updated Mar 20, 2023)
- PyTorch extensions for high performance and large scale training. (☆3,400, updated Apr 26, 2025)
- GLM (General Language Model) (☆3,437, updated Nov 3, 2023)
- Aligning pretrained language models with instruction data generated by themselves. (☆4,576, updated Mar 27, 2023)
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. (☆20,678, updated this week)
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) (☆9,037, updated Feb 21, 2026)
- Distributed trainer for LLMs (☆588, updated May 20, 2024)
- (no description) (☆84, updated Sep 9, 2023)
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… (☆9,513, updated this week)
- (no description) (☆1,560, updated Feb 20, 2026)
- A large-scale 7B pretraining language model developed by BaiChuan-Inc. (☆5,683, updated Jul 18, 2024)
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 (☆69, updated Jul 20, 2023)
- Accessible large language models via k-bit quantization for PyTorch. (☆7,997, updated this week)
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. (☆4,923, updated Dec 7, 2024)
- Hackable and optimized Transformers building blocks, supporting a composable construction. (☆10,353, updated Feb 20, 2026)
- A fast MoE implementation for PyTorch (☆1,834, updated Feb 10, 2025)