thinking-machines-lab / batch_invariant_ops
☆937 · Updated last month
Alternatives and similar repositories for batch_invariant_ops
Users interested in batch_invariant_ops are comparing it to the libraries listed below.
- ☆610 · Updated last week
- Async RL Training at Scale · ☆950 · Updated this week
- Scalable toolkit for efficient model reinforcement · ☆1,141 · Updated this week
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference · ☆327 · Updated last month
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines · ☆864 · Updated last week
- SkyRL: A Modular Full-stack RL Library for LLMs · ☆1,394 · Updated this week
- PyTorch-native post-training at scale · ☆572 · Updated this week
- LLM KV cache compression made easy · ☆717 · Updated last week
- Open-source framework for the research and development of foundation models · ☆673 · Updated this week
- KernelBench: Can LLMs Write GPU Kernels? Benchmark + toolkit with Torch -> CUDA (+ more DSLs) · ☆718 · Updated this week
- ☆565 · Updated 3 months ago
- PyTorch building blocks for the OLMo ecosystem · ☆612 · Updated this week
- Single-file, single-GPU, from-scratch, efficient, full-parameter tuning library for "RL for LLMs" · ☆565 · Updated 2 months ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates · ☆336 · Updated this week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… · ☆360 · Updated last year
- A project to improve the skills of large language models · ☆715 · Updated this week
- ☆225 · Updated 3 weeks ago
- Physics of Language Models, Part 4 · ☆270 · Updated 2 weeks ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. · ☆582 · Updated last month
- Training API and CLI · ☆266 · Updated last week
- An extension of the nanoGPT repository for training small MoE models · ☆219 · Updated 9 months ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) · ☆261 · Updated this week
- Load compute kernels from the Hub · ☆352 · Updated this week
- Efficient LLM Inference over Long Sequences · ☆393 · Updated 5 months ago
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling · ☆463 · Updated 7 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… · ☆271 · Updated 3 weeks ago
- Muon is Scalable for LLM Training · ☆1,387 · Updated 4 months ago
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards · ☆1,283 · Updated last week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ☆944 · Updated 9 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 · ☆349 · Updated 7 months ago