alibaba / Pai-Megatron-Patch
The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud.
☆1,367 · Updated this week
Alternatives and similar repositories for Pai-Megatron-Patch
Users interested in Pai-Megatron-Patch are comparing it to the libraries listed below.
- Best practice for training LLaMA models in Megatron-LM ☆659 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,166 · Updated last month
- FlagScale is a large model toolkit based on open-sourced projects. ☆358 · Updated last week
- An Efficient and User-Friendly Scaling Library for Reinforcement Learning with Large Language Models ☆2,005 · Updated last week
- Fast inference from large language models via speculative decoding ☆831 · Updated last year
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆874 · Updated last week
- Community maintained hardware plugin for vLLM on Ascend ☆1,179 · Updated last week
- Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible. ☆2,736 · Updated last week
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo ☆1,194 · Updated this week
- A flexible and efficient training framework for large-scale alignment tasks ☆428 · Updated this week
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆991 · Updated 10 months ago
- Reproduce R1 Zero on Logic Puzzle ☆2,400 · Updated 6 months ago
- LongBench v2 and LongBench (ACL '25 & '24) ☆983 · Updated 8 months ago
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆610 · Updated last month
- A streamlined and customizable framework for efficient large model evaluation and performance benchmarking ☆1,762 · Updated last week
- slime is an LLM post-training framework for RL Scaling. ☆2,091 · Updated this week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆573 · Updated 3 weeks ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,420 · Updated last year
- InternEvo is an open-sourced, lightweight training framework that aims to support model pre-training without the need for extensive dependencies ☆407 · Updated last month
- A PyTorch Native LLM Training Framework ☆872 · Updated 3 weeks ago
- Official Repo for Open-Reasoner-Zero ☆2,046 · Updated 4 months ago
- Ring attention implementation with flash attention ☆890 · Updated last month
- DLRover: An Automatic Distributed Deep Learning System ☆1,561 · Updated last week
- O1 Replication Journey ☆2,000 · Updated 8 months ago
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL ☆3,708 · Updated this week
- Train a 1B LLM with 1T tokens from scratch as an individual ☆740 · Updated 5 months ago
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25). ☆1,851 · Updated last week
- ☆503 · Updated 3 weeks ago
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆437 · Updated 3 weeks ago
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆966 · Updated 3 weeks ago