NVIDIA-NeMo / Automodel
PyTorch DTensor-native training library for LLMs/VLMs with out-of-the-box Hugging Face support
☆ 135 · Updated this week
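For context, here is a minimal sketch of the DTensor-native pattern the tagline refers to, written against plain PyTorch FSDP2 and Hugging Face Transformers rather than Automodel's own API (which this page does not show); the model name, mesh size, and hyperparameters are illustrative assumptions, and it assumes PyTorch >= 2.6.

```python
# Minimal sketch of DTensor-native distributed training with an
# out-of-the-box Hugging Face model. Uses plain PyTorch FSDP2
# (fully_shard stores parameters as DTensors), NOT Automodel's API;
# model name, mesh size, and lr are illustrative assumptions.
# Launch with: torchrun --nproc-per-node=2 train_sketch.py
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import fully_shard  # PyTorch >= 2.6
from transformers import AutoModelForCausalLM

mesh = init_device_mesh("cuda", (2,))            # 1-D mesh over 2 GPUs
model = AutoModelForCausalLM.from_pretrained("gpt2")

for block in model.transformer.h:                # shard each transformer block,
    fully_shard(block, mesh=mesh)                # then the root module
fully_shard(model, mesh=mesh)

optim = torch.optim.AdamW(model.parameters(), lr=1e-5)
# Standard forward/backward/step loop from here; parameters, gradients,
# and optimizer state are now DTensors, so checkpointing and resharding
# compose with the rest of PyTorch's distributed APIs.
```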
Alternatives and similar repositories for Automodel
Users interested in Automodel are comparing it to the libraries listed below.
- Training library for Megatron-based models (☆ 125 · Updated last week)
- Megatron's multi-modal data loader (☆ 252 · Updated last week)
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… (☆ 270 · Updated 3 months ago)
- Load compute kernels from the Hub (☆ 304 · Updated last week)
- ring-attention experiments (☆ 154 · Updated last year)
- ☆ 130 · Updated 4 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. (☆ 215 · Updated last week)
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training (☆ 216 · Updated last year)
- Flash-Muon: An Efficient Implementation of Muon Optimizer (☆ 195 · Updated 4 months ago)
- Triton-based implementation of Sparse Mixture of Experts. (☆ 246 · Updated 3 weeks ago)
- Odysseus: Playground of LLM Sequence Parallelism (☆ 78 · Updated last year)
- ☆ 112 · Updated last year
- Best practices for training DeepSeek, Mixtral, Qwen, and other MoE models using Megatron Core. (☆ 111 · Updated last week)
- Accelerate LLM preference tuning via prefix sharing with a single line of code (☆ 46 · Updated 3 months ago)
- ☆ 121 · Updated last year
- How to ensure correctness and ship LLM-generated kernels in PyTorch (☆ 107 · Updated this week)
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding (☆ 130 · Updated 10 months ago)
- This repository contains the experimental PyTorch-native float8 training UX (☆ 223 · Updated last year)
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… (☆ 161 · Updated last month)
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … (☆ 60 · Updated last year)
- PyTorch bindings for CUTLASS grouped GEMM; a reference sketch of the grouped-GEMM pattern follows this list. (☆ 125 · Updated 4 months ago)
- The evaluation framework for training-free sparse attention in LLMs (☆ 101 · Updated last week)
- ByteCheckpoint: A Unified Checkpointing Library for LFMs (☆ 249 · Updated 3 months ago)
- 🔥 LLM-powered GPU kernel synthesis: Train models to convert PyTorch ops into optimized Triton kernels via SFT+RL. Multi-turn compilation… (☆ 86 · Updated 2 weeks ago)
- Vocabulary Parallelism (☆ 23 · Updated 7 months ago)
- Memory-optimized Mixture of Experts (☆ 68 · Updated 3 months ago)
- ArcticInference: vLLM plugin for high-throughput, low-latency inference (☆ 283 · Updated this week)
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… (☆ 147 · Updated last year)
- Make SGLang go brrr (☆ 37 · Updated 3 weeks ago)
- Triton-based Symmetric Memory operators and examples (☆ 48 · Updated last week)
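For reference, a minimal pure-PyTorch sketch of the grouped-GEMM pattern that the CUTLASS bindings in the list accelerate: many independent matmuls with differing M dimensions (e.g., per-expert token counts in an MoE layer) treated as one logical operation. The function name and shapes below are illustrative assumptions, not the bindings' actual API, and the Python loop stands in for what CUTLASS fuses into a single kernel launch.

```python
import torch

def grouped_gemm_reference(xs, ws):
    """Reference semantics of a grouped GEMM: each (x, w) pair may have
    a different M dimension, as with per-expert token counts in MoE.
    A real grouped-GEMM kernel executes all of these in one launch;
    this loop only defines the expected outputs."""
    return [x @ w for x, w in zip(xs, ws)]

# Example: 4 "experts" with shared inner dim 64, uneven token counts.
torch.manual_seed(0)
ws = [torch.randn(64, 128) for _ in range(4)]      # one weight per expert
xs = [torch.randn(m, 64) for m in (3, 17, 9, 1)]   # per-expert token batches
outs = grouped_gemm_reference(xs, ws)              # shapes: (3,128), (17,128), ...
```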