alibaba / Pai-Megatron-Patch
The official repo of Pai-Megatron-Patch for LLM & VLM large scale training developed by Alibaba Cloud.
☆1,499 · Updated 3 weeks ago
Alternatives and similar repositories for Pai-Megatron-Patch
Users interested in Pai-Megatron-Patch are comparing it to the libraries listed below
- Best practice for training LLaMA models in Megatron-LM ☆664 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,215 · Updated 4 months ago
- An Efficient and User-Friendly Scaling Library for Reinforcement Learning with Large Language Models ☆2,570 · Updated this week
- FlagScale is a large model toolkit based on open-sourced projects. ☆431 · Updated this week
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo ☆1,504 · Updated this week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆984 · Updated this week
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,004 · Updated last year
- LongBench v2 and LongBench (ACL 25'&24') ☆1,057 · Updated 11 months ago
- Fast inference from large language models via speculative decoding ☆875 · Updated last year
- A flexible and efficient training framework for large-scale alignment tasks ☆447 · Updated 2 months ago
- slime is an LLM post-training framework for RL Scaling. ☆3,224 · Updated this week
- Reproduce R1 Zero on Logic Puzzle ☆2,425 · Updated 9 months ago
- Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible. ☆3,340 · Updated this week
- Community maintained hardware plugin for vLLM on Ascend ☆1,532 · Updated this week
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆614 · Updated 2 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,426 · Updated last year
- Train a 1B LLM with 1T tokens from scratch as a personal project ☆782 · Updated 8 months ago
- Ring attention implementation with flash attention ☆957 · Updated 3 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆619 · Updated 2 weeks ago
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs ☆915 · Updated last month
- A streamlined and customizable framework for efficient large model (LLM, VLM, AIGC) evaluation and performance benchmarking. ☆2,215 · Updated this week
- Official Repo for Open-Reasoner-Zero ☆2,085 · Updated 7 months ago
- ☆757 · Updated 2 weeks ago
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆471 · Updated last week
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆1,065 · Updated 2 weeks ago
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL ☆4,371 · Updated last week
- CMMLU: Measuring massive multitask language understanding in Chinese ☆798 · Updated last year
- InternEvo is an open-sourced lightweight training framework that aims to support model pre-training without the need for extensive dependencies… ☆416 · Updated 4 months ago
- DLRover: An Automatic Distributed Deep Learning System ☆1,619 · Updated this week
- Welcome to the "LLM-travel" repository! Explore the mysteries of large language models (LLMs) 🚀. Dedicated to deeply understanding, discussing, and implementing techniques, principles, and applications related to large models. ☆359 · Updated last year