alibaba / Pai-Megatron-Patch
The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud.
☆1,405 · Updated last week
Alternatives and similar repositories for Pai-Megatron-Patch
Users interested in Pai-Megatron-Patch are comparing it to the libraries listed below.
- Best practice for training LLaMA models in Megatron-LM ☆659 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,178 · Updated 2 months ago
- An Efficient and User-Friendly Scaling Library for Reinforcement Learning with Large Language Models ☆2,130 · Updated last week
- FlagScale is a large model toolkit based on open-source projects. ☆364 · Updated last week
- Fast inference from large language models via speculative decoding ☆844 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆994 · Updated 10 months ago
- A flexible and efficient training framework for large-scale alignment tasks ☆436 · Updated last week
- LongBench v2 and LongBench (ACL '25 & '24) ☆1,005 · Updated 9 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆903 · Updated last week
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo ☆1,247 · Updated this week
- Community-maintained hardware plugin for vLLM on Ascend ☆1,262 · Updated this week
- Reproduce R1 Zero on logic puzzles ☆2,408 · Updated 7 months ago
- slime is an LLM post-training framework for RL Scaling. ☆2,323 · Updated this week
- A streamlined and customizable framework for efficient large model (LLM, VLM, AIGC) evaluation and performance benchmarking. ☆1,853 · Updated this week
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆612 · Updated this week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆586 · Updated 2 weeks ago
- InternEvo is an open-sourced lightweight training framework that aims to support model pre-training without the need for extensive dependencies. ☆411 · Updated 2 months ago
- Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible. ☆2,903 · Updated this week
- A PyTorch Native LLM Training Framework ☆879 · Updated last month
- Train a 1B LLM with 1T tokens from scratch as an individual ☆741 · Updated 6 months ago
- Ring attention implementation with flash attention ☆903 · Updated last month
- Accelerate inference without tears ☆364 · Updated 2 weeks ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,423 · Updated last year
- ☆749 · Updated last month
- Welcome to the "LLM-travel" repository! Explore the mysteries of large language models (LLMs) 🚀. Dedicated to deeply understanding, discussing, and implementing techniques, principles, and applications related to large models. ☆347 · Updated last year
- Official Repo for Open-Reasoner-Zero ☆2,056 · Updated 5 months ago
- Disaggregated serving system for Large Language Models (LLMs). ☆709 · Updated 6 months ago
- ☆508 · Updated last month
- O1 Replication Journey ☆2,003 · Updated 9 months ago
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆988 · Updated last week