yanring / Megatron-MoE-ModelZoo
Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core.
☆64 · Updated this week
Alternatives and similar repositories for Megatron-MoE-ModelZoo
Users interested in Megatron-MoE-ModelZoo are comparing it to the libraries listed below:
- PyTorch bindings for CUTLASS grouped GEMM (see the grouped-GEMM sketch after this list). ☆110 · Updated 2 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆214 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆137 · Updated last month
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning ☆97 · Updated this week
- Utility scripts for PyTorch (e.g. a memory profiler that understands lower-level allocations, such as NCCL's) ☆49 · Updated 2 weeks ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆278 · Updated last year
- Estimate MFU for DeepSeekV3 (a back-of-envelope MFU sketch follows this list). ☆24 · Updated 7 months ago
- ☆123 · Updated 2 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆207 · Updated this week
- Pipeline Parallelism Emulation and Visualization ☆60 · Updated 2 months ago
- Allow torch tensor memory to be released and resumed later ☆115 · Updated 2 weeks ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activations for Memory-Efficient FP8 Training ☆231 · Updated 2 weeks ago
- Odysseus: Playground of LLM Sequence Parallelism ☆76 · Updated last year
- ☆110 · Updated last year
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆261 · Updated last month
- Triton-based implementation of Sparse Mixture of Experts. ☆233 · Updated this week
- TritonBench: Benchmarking Large Language Model Capabilities for Generating Triton Operators ☆74 · Updated 2 months ago
- A Quirky Assortment of CuTe Kernels ☆411 · Updated this week
- Code for paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆133 · Updated 3 months ago
- ☆146 · Updated 5 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLMs ☆165 · Updated last year
- ☆43 · Updated last year
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆323 · Updated last month
- Applied AI experiments and examples for PyTorch ☆291 · Updated this week
- 16-fold memory access reduction with nearly no loss ☆104 · Updated 5 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆319 · Updated last year
- ☆92 · Updated 5 months ago
- ☆88 · Updated 9 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆80 · Updated 11 months ago
- Triton implementation of FlashAttention2 that adds custom masks (see the masking sketch below). ☆132 · Updated last year
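
The two CUTLASS grouped-GEMM entries above bind a fused kernel that runs one GEMM per expert group in a single launch, with a different number of rows per group. Below is a minimal pure-PyTorch sketch of the semantics only, i.e. the slow reference loop such a kernel replaces; the function name and shapes are illustrative, not the bindings' actual API.

```python
import torch

def grouped_gemm_reference(x, weights, group_sizes):
    """Reference semantics of a grouped GEMM (illustrative, not the real API).

    x:           (sum(group_sizes), k) tokens already sorted by expert
    weights:     (num_groups, k, n)    one weight matrix per expert
    group_sizes: tokens routed to each expert (rows of each GEMM)
    """
    outs, start = [], 0
    for g, size in enumerate(group_sizes):
        # A fused CUTLASS kernel performs all of these matmuls in one launch.
        outs.append(x[start:start + size] @ weights[g])  # (size, n)
        start += size
    return torch.cat(outs, dim=0)

x = torch.randn(10, 8)
w = torch.randn(3, 8, 16)
y = grouped_gemm_reference(x, w, [4, 3, 3])  # -> (10, 16)
```

The variable per-group row counts are what make this pattern awkward for a single batched matmul, which is why MoE training stacks fuse it into one kernel.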
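For the MFU entry above: model FLOPs utilization is achieved model FLOPs divided by peak hardware FLOPs. A back-of-envelope sketch, assuming the common 6 × params FLOPs-per-training-token rule (forward + backward, attention cost ignored) applied to activated parameters for an MoE; all numbers below are illustrative assumptions, not figures from that repository.

```python
def estimate_mfu(activated_params, tokens_per_sec, num_gpus, peak_flops_per_gpu):
    # Training FLOPs per token ~= 6 * activated params (fwd + bwd, attention ignored).
    achieved_flops = 6 * activated_params * tokens_per_sec
    return achieved_flops / (num_gpus * peak_flops_per_gpu)

# Illustrative only: ~37e9 activated params, H100-class 989 TFLOP/s BF16 peak.
mfu = estimate_mfu(activated_params=37e9, tokens_per_sec=1e6,
                   num_gpus=2048, peak_flops_per_gpu=989e12)
print(f"MFU ~ {mfu:.1%}")  # ~11% under these made-up numbers
```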
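For the custom-mask FlashAttention2 entry above, here is a minimal sketch of what a custom attention mask expresses, using PyTorch's built-in scaled_dot_product_attention as a stand-in for the repository's fused Triton kernel; the causal-plus-block-local mask is a hypothetical example, not one the repo necessarily ships.

```python
import torch
import torch.nn.functional as F

b, h, s, d = 1, 2, 8, 16
q, k, v = (torch.randn(b, h, s, d) for _ in range(3))

# Hypothetical custom mask: causal AND block-local, i.e. each token may only
# attend to earlier tokens inside its own 4-token block (True = attend).
causal = torch.tril(torch.ones(s, s, dtype=torch.bool))
block = (torch.arange(s)[:, None] // 4) == (torch.arange(s)[None, :] // 4)
mask = causal & block  # (s, s), broadcast over batch and heads

out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)  # (b, h, s, d)
```

A fused kernel applies such masks inside its attention tiles instead of materializing the full (s, s) tensor, which is the point of adding mask support to FlashAttention2.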