Best practices for training LLaMA models in Megatron-LM
☆664 · Jan 2, 2024 · Updated 2 years ago
Alternatives and similar repositories for Megatron-LLaMA
Users interested in Megatron-LLaMA are comparing it to the libraries listed below. A minimal parallel-sizing sketch follows the list.
- The official repo of Pai-Megatron-Patch for LLM & VLM large-scale training, developed by Alibaba Cloud. ☆1,563 · Dec 15, 2025 · Updated 4 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,247 · Aug 14, 2025 · Updated 8 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Jul 20, 2023 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,438 · Mar 20, 2024 · Updated 2 years ago
- Ongoing research training transformer models at scale ☆16,203 · Updated this week
- ☆84 · Sep 9, 2023 · Updated 2 years ago
- Distributed trainer for LLMs ☆589 · May 20, 2024 · Updated last year
- Zero Bubble Pipeline Parallelism ☆452 · May 7, 2025 · Updated 11 months ago
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs ☆1,009 · Mar 3, 2026 · Updated 2 months ago
- InternEvo is an open-source, lightweight training framework that aims to support model pre-training without the need for extensive dependencies. ☆419 · Aug 21, 2025 · Updated 8 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,312 · Updated this week
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆221 · Aug 19, 2024 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,001 · Dec 6, 2024 · Updated last year
- Example models using DeepSpeed ☆6,820 · Mar 30, 2026 · Updated last month
- Microsoft Automatic Mixed Precision Library ☆636 · Dec 1, 2025 · Updated 5 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆643 · Mar 4, 2024 · Updated 2 years ago
- Transformer-related optimization, including BERT, GPT ☆6,415 · Mar 27, 2024 · Updated 2 years ago
- Fast and memory-efficient exact attention ☆23,628 · Updated this week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,333 · Mar 6, 2025 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆2,674 · Apr 7, 2026 · Updated 3 weeks ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆666 · Jan 15, 2026 · Updated 3 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,678 · Mar 8, 2024 · Updated 2 years ago
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability… ☆4,036 · Updated this week
- Ring attention implementation with flash attention ☆1,015 · Sep 10, 2025 · Updated 7 months ago
- OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, … ☆6,959 · Apr 20, 2026 · Updated 2 weeks ago
- An Efficient and User-Friendly Scaling Library for Reinforcement Learning with Large Language Models ☆3,114 · Updated this week
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,110 · Jun 30, 2025 · Updated 10 months ago
- DLRover: An Automatic Distributed Deep Learning System ☆1,652 · Apr 15, 2026 · Updated 2 weeks ago
- Train LLaMA on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆224 · Nov 21, 2023 · Updated 2 years ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆757 · Sep 27, 2024 · Updated last year
- A PyTorch native platform for training generative AI models ☆5,286 · Updated this week
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,424 · Mar 3, 2024 · Updated 2 years ago
- A LLaMA1/LLaMA2 Megatron implementation. ☆28 · Dec 13, 2023 · Updated 2 years ago
- Scalable toolkit for efficient model alignment ☆853 · Oct 6, 2025 · Updated 6 months ago
- BELLE: Be Everyone's Large Language Model Engine (an open-source Chinese conversational large language model) ☆8,284 · Oct 16, 2024 · Updated last year
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,730 · Jun 25, 2024 · Updated last year
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy… ☆9,441 · Updated this week
- verl/HybridFlow: A Flexible and Efficient RL Post-Training Framework ☆21,046 · Updated this week
- PyTorch extensions for high-performance and large-scale training. ☆3,409 · Apr 26, 2025 · Updated last year
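
Most of the trainers listed above (Megatron-LLaMA and other Megatron-LM forks, Pai-Megatron-Patch, nanotron, DeepSpeed-based stacks, torchtitan) combine tensor, pipeline, and data parallelism, so a job's GPU count and batch settings have to satisfy the same divisibility constraints. The sketch below only illustrates that sizing arithmetic under those common conventions; the function and parameter names are illustrative assumptions, not the API of any repository on this page.

```python
# Illustrative sketch of 3D-parallel sizing arithmetic used by Megatron-style
# trainers. Names are hypothetical; consult each repo's docs for its real flags.

def check_3d_parallel_config(
    world_size: int,          # total number of GPUs in the job
    tensor_parallel: int,     # GPUs splitting each layer's matmuls (TP)
    pipeline_parallel: int,   # GPUs splitting the layer stack into stages (PP)
    micro_batch_size: int,    # per-replica batch fed through one pipeline pass
    global_batch_size: int,   # samples consumed per optimizer step overall
) -> dict:
    """Derive data-parallel size and gradient-accumulation steps, or raise."""
    model_parallel = tensor_parallel * pipeline_parallel
    if world_size % model_parallel != 0:
        raise ValueError("world_size must be divisible by TP * PP")
    data_parallel = world_size // model_parallel

    samples_per_micro_step = micro_batch_size * data_parallel
    if global_batch_size % samples_per_micro_step != 0:
        raise ValueError("global batch must be divisible by micro_batch * DP")
    grad_accum_steps = global_batch_size // samples_per_micro_step

    return {"data_parallel": data_parallel, "grad_accum_steps": grad_accum_steps}


if __name__ == "__main__":
    # Example: 64 GPUs with TP=8, PP=2 leaves DP=4; a global batch of 512 with
    # micro batch 4 then needs 32 gradient-accumulation steps per optimizer step.
    print(check_3d_parallel_config(64, 8, 2, 4, 512))
```

The same bookkeeping is what flags such as tensor/pipeline parallel sizes and micro/global batch sizes control in Megatron-style launch scripts; frameworks differ mainly in how they schedule the pipeline (e.g. zero-bubble schedules) and overlap communication, not in this arithmetic.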