alibaba / Pai-Megatron-Patch
The official repo of Pai-Megatron-Patch for LLM & VLM large scale training developed by Alibaba Cloud.
☆1,347 · Updated this week
Alternatives and similar repositories for Pai-Megatron-Patch
Users interested in Pai-Megatron-Patch are comparing it to the libraries listed below.
- Best practice for training LLaMA models in Megatron-LM ☆661 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,161 · Updated last month
- FlagScale is a large model toolkit based on open-sourced projects. ☆353 · Updated last week
- An Efficient and User-Friendly Scaling Library for Reinforcement Learning with Large Language Models ☆1,912 · Updated 2 weeks ago
- Community maintained hardware plugin for vLLM on Ascend ☆1,128 · Updated this week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆851 · Updated last week
- A flexible and efficient training framework for large-scale alignment tasks ☆425 · Updated this week
- LongBench v2 and LongBench (ACL '25 & '24) ☆970 · Updated 8 months ago
- Fast inference from large language models via speculative decoding ☆823 · Updated last year
- Distributed RL System for LLM Reasoning ☆2,614 · Updated this week
- Reproduce R1 Zero on Logic Puzzle ☆2,397 · Updated 6 months ago
- VeOmni: Scaling any Modality Model Training to any Accelerators with PyTorch native Training Framework ☆1,087 · Updated 3 weeks ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆987 · Updated 9 months ago
- slime is an LLM post-training framework for RL Scaling. ☆1,827 · Updated this week
- A streamlined and customizable framework for efficient large model evaluation and performance benchmarking ☆1,690 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,417 · Updated last year
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆608 · Updated 3 weeks ago
- A PyTorch Native LLM Training Framework ☆865 · Updated last week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆562 · Updated last week
- InternEvo is an open-sourced lightweight training framework that aims to support model pre-training without the need for extensive dependencies ☆407 · Updated last month
- Ring attention implementation with flash attention ☆866 · Updated last week
- Train a 1B LLM on 1T tokens from scratch as a personal project ☆735 · Updated 4 months ago
- Accelerate inference without tears ☆324 · Updated 6 months ago
- DLRover: An Automatic Distributed Deep Learning System ☆1,552 · Updated this week
- Official Repo for Open-Reasoner-Zero ☆2,039 · Updated 3 months ago
- Collaborative Training of Large Language Models in an Efficient Way ☆416 · Updated last year
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3. ☆1,780 · Updated last week
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆432 · Updated last week
- ☆497 · Updated last week
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆932 · Updated last week