meta-pytorch / torchforge
PyTorch-native post-training at scale
☆566 · Updated this week
Alternatives and similar repositories for torchforge
Users interested in torchforge often compare it to the libraries listed below.
- Scalable toolkit for efficient model reinforcement ☆1,141 · Updated this week
- ☆610 · Updated this week
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆329 · Updated last week
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆321 · Updated last month
- ☆937 · Updated last month
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆456 · Updated 2 weeks ago
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆864 · Updated last week
- Training API and CLI ☆266 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆271 · Updated 3 weeks ago
- ☆225 · Updated 3 weeks ago
- Load compute kernels from the Hub ☆347 · Updated this week
- Dion optimizer algorithm ☆404 · Updated this week
- Async RL Training at Scale ☆938 · Updated this week
- Simple & Scalable Pretraining for Neural Architecture Research ☆305 · Updated last week
- Physics of Language Models, Part 4 ☆270 · Updated last week
- PyTorch distributed-native training library for LLMs/VLMs with out-of-the-box Hugging Face support ☆209 · Updated this week
- Open-source framework for the research and development of foundation models. ☆658 · Updated this week
- PyTorch building blocks for the OLMo ecosystem ☆563 · Updated this week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆360 · Updated last year
- HuggingFace conversion and training library for Megatron-based models ☆270 · Updated this week
- A Gym for Agentic LLMs ☆404 · Updated last month
- PyTorch Single Controller ☆928 · Updated this week
- Open-source release accompanying Gao et al. 2025 ☆218 · Updated last week
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆261 · Updated this week
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆582 · Updated last month
- ☆1,087 · Updated this week
- LLM KV cache compression made easy ☆717 · Updated this week
- Normalized Transformer (nGPT) ☆193 · Updated last year
- ☆465 · Updated 3 months ago
- Memory-optimized Mixture of Experts ☆69 · Updated 4 months ago
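The memory-layers entry above describes adding extra parameters through a trainable key-value lookup without increasing FLOPs. A minimal NumPy sketch of that idea (every name, dimension, and function here is illustrative, not that repository's actual API): a query scores all keys, only the top-k keys and their values participate in the output, so parameter count grows with the memory size while compute stays roughly proportional to k.

```python
import numpy as np

# Illustrative sketch of a memory layer: a trainable key-value lookup.
# The keys/values are parameters; only the k best-matching rows are
# touched per query, so adding more rows adds capacity, not FLOPs.
rng = np.random.default_rng(0)
d, n_keys, k = 16, 1024, 4                  # hidden dim, memory size, top-k
keys = rng.standard_normal((n_keys, d))     # trainable keys (parameters)
values = rng.standard_normal((n_keys, d))   # trainable values (parameters)

def memory_lookup(query):
    scores = keys @ query                     # similarity to every key
    top = np.argpartition(scores, -k)[-k:]    # indices of the top-k keys
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                              # softmax over the selected keys
    return w @ values[top]                    # weighted sum of their values

out = memory_lookup(rng.standard_normal(d))
print(out.shape)  # (16,)
```

In a real implementation (e.g. product-key memories) the key search itself is factored to stay cheap even for millions of rows; the sketch above keeps the brute-force scoring for clarity.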