thinking-machines-lab / batch_invariant_ops
☆961 · Updated 3 months ago
Alternatives and similar repositories for batch_invariant_ops
Users interested in batch_invariant_ops are comparing it to the libraries listed below.
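batch_invariant_ops addresses batch-size-dependent nondeterminism in GPU kernels: floating-point addition is not associative, so a reduction that accumulates terms in a different order for different batch sizes can return different results for the same input row. A minimal pure-Python illustration of the underlying effect (this only demonstrates the numerical root cause, not the library's API):

```python
# Floating-point addition is not associative, so the order in which a
# reduction accumulates terms changes the result. Batch-variant kernels
# choose different accumulation orders for different batch sizes, which
# is the nondeterminism batch-invariant ops are designed to remove.
left_to_right = (0.1 + 0.2) + 0.3
right_to_left = 0.1 + (0.2 + 0.3)
print(left_to_right)                   # 0.6000000000000001
print(right_to_left)                   # 0.6
print(left_to_right == right_to_left)  # False
```

A batch-invariant kernel fixes the accumulation order regardless of batch size, trading some throughput for bitwise-reproducible outputs.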
- Miles is an enterprise-facing reinforcement learning framework for LLM and VLM post-training, forked from and co-evolving with slime. ☆858 · Updated this week
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines. ☆902 · Updated last week
- Async RL Training at Scale. ☆1,044 · Updated this week
- Scalable toolkit for efficient model reinforcement. ☆1,307 · Updated this week
- PyTorch-native post-training at scale. ☆613 · Updated this week
- SkyRL: A Modular Full-stack RL Library for LLMs. ☆1,547 · Updated this week
- KernelBench: Can LLMs Write GPU Kernels? Benchmark + toolkit with Torch -> CUDA (+ more DSLs). ☆792 · Updated 3 weeks ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆334 · Updated 3 months ago
- LLM KV cache compression made easy. ☆876 · Updated 2 weeks ago
- Training API and CLI. ☆330 · Updated this week
- ☆232 · Updated 2 months ago
- A project to improve skills of large language models. ☆813 · Updated this week
- ☆579 · Updated 4 months ago
- Open-source framework for the research and development of foundation models. ☆752 · Updated this week
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆627 · Updated 2 weeks ago
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs". ☆593 · Updated 4 months ago
- PyTorch building blocks for the OLMo ecosystem. ☆785 · Updated this week
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆361 · Updated last week
- Training library for Megatron-based models with bidirectional Hugging Face conversion capability. ☆419 · Updated this week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆371 · Updated last year
- Load compute kernels from the Hub. ☆397 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆280 · Updated 2 months ago
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards. ☆1,332 · Updated 3 weeks ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention". ☆964 · Updated last week
- Physics of Language Models: Part 4.2, Canon Layers at Scale where Synthetic Pretraining Resonates in Reality. ☆317 · Updated last month
- A Gym for Agentic LLMs. ☆444 · Updated 3 weeks ago
- PyTorch Distributed-native training library for LLMs/VLMs with OOTB Hugging Face support. ☆288 · Updated this week
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs). ☆273 · Updated last week
- Muon is Scalable for LLM Training. ☆1,426 · Updated 6 months ago
- Dion optimizer algorithm. ☆431 · Updated 3 weeks ago