PyTorch Distributed-native training library for LLMs/VLMs with out-of-the-box (OOTB) Hugging Face support
☆368 · Mar 14, 2026 · Updated last week
Alternatives and similar repositories for Automodel
Users interested in Automodel are comparing it to the libraries listed below.
- Training library for Megatron-based models with bidirectional Hugging Face conversion capability ☆509 · Updated this week
- Scalable toolkit for efficient model reinforcement ☆1,418 · Updated this week
- A from-scratch C implementation of the multi-head latent attention used in the DeepSeek-V3 technical report ☆18 · Jan 15, 2025 · Updated last year
- Tiny-DeepSpeed, a minimalistic re-implementation of the DeepSpeed library ☆50 · Aug 20, 2025 · Updated 7 months ago
- Efficient Long-context Language Model Training by Core Attention Disaggregation ☆96 · Mar 5, 2026 · Updated 2 weeks ago
- A tool that facilitates debugging convergence issues and testing new algorithms and recipes for training LLMs with NVIDIA libraries such as… ☆19 · Sep 17, 2025 · Updated 6 months ago
- [Archived] For the latest updates and community contribution, please visit: https://github.com/Ascend/TransferQueue or https://gitcode.co… ☆13 · Jan 16, 2026 · Updated 2 months ago
- Accelerating MoE with IO and Tile-aware Optimizations ☆606 · Feb 27, 2026 · Updated 3 weeks ago
- Minimalistic large language model 3D-parallelism training ☆2,617 · Feb 19, 2026 · Updated last month
- GPU-optimized framework for training diffusion language models at any scale. The backend of Quokka, Super Data Learners, and OpenMoE 2 tr… ☆328 · Nov 11, 2025 · Updated 4 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,211 · Updated this week
- Miles is an enterprise-facing reinforcement learning framework for LLM and VLM post-training, forked from and co-evolving with slime. ☆974 · Updated this week
- ☆47 · May 20, 2025 · Updated 10 months ago
- Tiny-FSDP, a minimalistic re-implementation of PyTorch FSDP ☆99 · Aug 20, 2025 · Updated 7 months ago
- Scalable data pre-processing and curation toolkit for LLMs ☆1,460 · Updated this week
- ☆32 · Apr 19, 2025 · Updated 11 months ago
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and cach… ☆59 · Oct 27, 2025 · Updated 4 months ago
- A simple API to use CUPTI ☆10 · Aug 19, 2025 · Updated 7 months ago
- ☆31 · Dec 31, 2025 · Updated 2 months ago
- A high-performance RL training-inference weight synchronization framework, designed to enable second-level parameter updates from trainin… ☆139 · Mar 11, 2026 · Updated last week
- Utility scripts for PyTorch (e.g. Make Perfetto show some disappearing kernels, Memory profiler that understands more low-level allocatio… ☆93 · Sep 11, 2025 · Updated 6 months ago
- A PyTorch native platform for training generative AI models ☆5,162 · Updated this week
- A tool to configure, launch and manage your machine learning experiments. ☆220 · Updated this week
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆482 · Mar 10, 2026 · Updated last week
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆91 · Jan 26, 2026 · Updated last month
- Byted PyTorch Distributed for Hyperscale Training of LLMs and RLs ☆1,000 · Mar 3, 2026 · Updated 2 weeks ago
- A lightweight, user-friendly data-plane for LLM training. ☆38 · Sep 10, 2025 · Updated 6 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆21 · Updated this week
- Ongoing research training transformer models at scale ☆15,744 · Updated this week
- Ship correct and fast LLM kernels to PyTorch ☆145 · Jan 14, 2026 · Updated 2 months ago
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. ☆176 · Updated this week
- Large Context Attention ☆769 · Oct 13, 2025 · Updated 5 months ago
- LoRAFusion: Efficient LoRA Fine-Tuning for LLMs ☆25 · Sep 23, 2025 · Updated 5 months ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆66 · Mar 21, 2022 · Updated 3 years ago
- slime is an LLM post-training framework for RL Scaling. ☆4,799 · Updated this week
- The official repo for "OpenMoE 2: Sparse Diffusion Language Models". ☆53 · Dec 28, 2025 · Updated 2 months ago
- Implementation of Dual Learning NMT & Joint Training in TensorFlow ☆12 · Dec 29, 2018 · Updated 7 years ago
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆271 · Updated this week
- Large Gaussian Reconstruction Model for Efficient 3D Reconstruction and Generation ☆17 · Apr 3, 2024 · Updated last year