NVIDIA-NeMo / Automodel
DTensor-native pretraining and fine-tuning for LLMs/VLMs with day-0 Hugging Face support; GPU-accelerated and memory-efficient.
☆71 · Updated this week
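To ground what "DTensor-native" means here, the sketch below shards a weight matrix across a device mesh with PyTorch's DTensor API. It illustrates the underlying PyTorch mechanism only, not Automodel's own API; the file name, mesh size, and tensor shapes are made up for the example, and `torch.distributed.tensor` is public in recent PyTorch releases.

```python
# Minimal DTensor sharding sketch (plain PyTorch; not Automodel's API).
# Run with: torchrun --nproc-per-node=2 dtensor_sketch.py
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import distribute_tensor, Shard

# Build a 1-D device mesh over the two launched ranks.
mesh = init_device_mesh("cuda", (2,))

# Shard a weight matrix along dim 0: each rank holds one row-slice,
# so parameter memory is split across devices instead of replicated.
weight = torch.randn(1024, 1024)
dweight = distribute_tensor(weight, mesh, placements=[Shard(0)])

# Each rank sees the same logical tensor but only a local shard.
print(dweight.placements, dweight.to_local().shape)
# e.g. (Shard(dim=0),) torch.Size([512, 1024])
```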
Alternatives and similar repositories for Automodel
Users interested in Automodel are comparing it to the libraries listed below.
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆209 · Updated last week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆265 · Updated last month
- Load compute kernels from the Hub ☆271 · Updated this week
- ☆124 · Updated 3 months ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆161 · Updated 2 months ago
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference ☆71 · Updated 5 months ago
- Best practices for training DeepSeek, Mixtral, Qwen and other MoE models using Megatron Core. ☆86 · Updated this week
- Flash-Muon: An Efficient Implementation of Muon Optimizer (a minimal Muon sketch appears after this list) ☆181 · Updated 3 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆216 · Updated last year
- ☆110 · Updated last year
- Megatron's multi-modal data loader ☆243 · Updated last week
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆145 · Updated last year
- ☆118 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts. ☆238 · Updated 3 weeks ago
- ring-attention experiments ☆150 · Updated 10 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated last year
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆210 · Updated this week
- Training library for Megatron-based models ☆61 · Updated this week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆395 · Updated 2 weeks ago
- The evaluation framework for training-free sparse attention in LLMs ☆91 · Updated 2 months ago
- Applied AI experiments and examples for PyTorch ☆295 · Updated 3 weeks ago
- ☆234 · Updated this week
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated 11 months ago
- ☆216 · Updated 7 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆82 · Updated last year
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆140 · Updated last week
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆269 · Updated last month
- A tool to configure, launch and manage your machine learning experiments. ☆190 · Updated this week
- ☆168 · Updated last year
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆157 · Updated this week
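As referenced from the Flash-Muon entry above, here is a minimal plain-PyTorch sketch of the Muon update itself: momentum accumulation followed by Newton-Schulz orthogonalization of the 2-D update. The helper names (`muon_step`, `newton_schulz_orthogonalize`) are made up for this sketch; the iteration coefficients follow the public Muon reference implementation, and none of this reflects Flash-Muon's fused CUDA kernels.

```python
# Unoptimized reference sketch of the Muon optimizer update.
import torch

def newton_schulz_orthogonalize(g: torch.Tensor, steps: int = 5) -> torch.Tensor:
    # Quintic Newton-Schulz iteration that approximately maps a matrix to
    # the nearest (semi-)orthogonal matrix; coefficients from the public
    # Muon reference implementation (bfloat16 cast omitted for clarity).
    a, b, c = 3.4445, -4.7750, 2.0315
    x = g / (g.norm() + 1e-7)        # normalize so the iteration converges
    transposed = x.size(0) > x.size(1)
    if transposed:                   # iterate on the wide orientation
        x = x.T
    for _ in range(steps):
        s = x @ x.T
        x = a * x + (b * s + c * s @ s) @ x
    return x.T if transposed else x

@torch.no_grad()
def muon_step(param, grad, momentum_buf, lr=0.02, beta=0.95):
    # Standard momentum accumulation, then an orthogonalized update direction.
    momentum_buf.mul_(beta).add_(grad)
    param.add_(newton_schulz_orthogonalize(momentum_buf), alpha=-lr)

# Toy usage: one update on a random 2-D weight.
w = torch.randn(256, 128)
buf = torch.zeros_like(w)
muon_step(w, torch.randn_like(w), buf)
```

The matrix iteration inside `newton_schulz_orthogonalize` dominates the cost of a Muon step; making that step fast is, per its description, what Flash-Muon targets, while the code above is only the mathematical baseline.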