yanring / Megatron-MoE-ModelZoo
Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core.
☆161 · Updated 3 weeks ago
Alternatives and similar repositories for Megatron-MoE-ModelZoo
Users interested in Megatron-MoE-ModelZoo are comparing it to the libraries listed below.
- PyTorch bindings for CUTLASS grouped GEMM (see the grouped-GEMM sketch after this list). ☆184 · Updated last month
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆143 · Updated 8 months ago
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning ☆193 · Updated this week
- Zero Bubble Pipeline Parallelism ☆449 · Updated 9 months ago
- Accelerating MoE with IO and Tile-aware Optimizations ☆583 · Updated last week
- Allow torch tensor memory to be released and resumed later ☆216 · Updated last month
- nnScaler: Compiling DNN models for Parallel Training ☆124 · Updated 4 months ago
- Pipeline Parallelism Emulation and Visualization ☆77 · Updated last month
- ☆155 · Updated 11 months ago
- A collection of memory efficient attention operators implemented in the Triton language. ☆287 · Updated last year
- Utility scripts for PyTorch (e.g. Make Perfetto show some disappearing kernels, Memory profiler that understands more low-level allocatio… ☆86 · Updated 5 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆372 · Updated 7 months ago
- Building the Virtuous Cycle for AI-driven LLM Systems ☆164 · Updated this week
- Autonomous GPU Kernel Generation via Deep Agents ☆233 · Updated this week
- ☆89 · Updated 3 years ago
- Training library for Megatron-based models with bidirectional Hugging Face conversion capability ☆419 · Updated this week
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆233 · Updated 2 years ago
- LLM training technologies developed by kwai ☆70 · Updated 3 weeks ago
- Applied AI experiments and examples for PyTorch ☆315 · Updated 5 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆258 · Updated 6 months ago
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆269 · Updated last week
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆219 · Updated last week
- Sequence-level 1F1B schedule for LLMs. ☆38 · Updated 5 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆336 · Updated last year
- ☆47 · Updated last year
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving. ☆683 · Updated this week
- ☆159 · Updated last year
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆363 · Updated 9 months ago
- ☆352 · Updated last year
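Two entries above are bindings for CUTLASS grouped GEMM, the core operation in MoE expert layers. For readers unfamiliar with it, here is a minimal PyTorch sketch of the semantics a grouped GEMM computes; this loop is purely illustrative reference code, not the bindings' actual API, and a fused grouped-GEMM kernel performs all expert slices in a single launch instead of a Python loop.

```python
import torch

def grouped_gemm_reference(tokens, expert_weights, tokens_per_expert):
    """Reference semantics of a grouped GEMM for MoE expert layers.

    tokens:            (total_tokens, hidden), already sorted by expert
    expert_weights:    (num_experts, hidden, ffn), one weight matrix per expert
    tokens_per_expert: list of ints summing to total_tokens
    """
    outputs, start = [], 0
    for e, n in enumerate(tokens_per_expert):
        # Each expert multiplies only its own contiguous slice of tokens;
        # a fused kernel does all of these variable-sized GEMMs at once.
        outputs.append(tokens[start:start + n] @ expert_weights[e])
        start += n
    return torch.cat(outputs, dim=0)

# Tiny usage example: 6 tokens, hidden size 8, 2 experts with (8, 16) weights.
x = torch.randn(6, 8)
w = torch.randn(2, 8, 16)
y = grouped_gemm_reference(x, w, [4, 2])
assert y.shape == (6, 16)
```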