thinking-machines-lab / batch_invariant_ops
☆773 · Updated 3 weeks ago
Alternatives and similar repositories for batch_invariant_ops
Users interested in batch_invariant_ops are comparing it to the libraries listed below.
- Post-training with Tinker ☆550 · Updated this week
- Scalable toolkit for efficient model reinforcement ☆910 · Updated this week
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆751 · Updated this week
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference (see the FlexAttention sketch after this list) ☆280 · Updated last month
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆535 · Updated 2 months ago
- Async RL Training at Scale ☆650 · Updated this week
- ☆221 · Updated 7 months ago
- SkyRL: A Modular Full-stack RL Library for LLMs ☆906 · Updated this week
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems ☆581 · Updated 2 weeks ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs (see the memory-layer sketch after this list). Conceptually, spars… ☆343 · Updated 9 months ago
- LLM KV cache compression made easy ☆623 · Updated this week
- Open-source framework for the research and development of foundation models ☆462 · Updated this week
- An extension of the nanoGPT repository for training small MoE models ☆195 · Updated 6 months ago
- A project to improve the skills of large language models ☆568 · Updated this week
- Load compute kernels from the Hub ☆290 · Updated last week
- Parallel Scaling Law for Language Models — Beyond Parameter and Inference Time Scaling ☆443 · Updated 4 months ago
- ☆531 · Updated last week
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆472 · Updated 2 weeks ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆877 · Updated 6 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆268 · Updated 2 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch (see the running-softmax sketch after this list) ☆540 · Updated 4 months ago
- PyTorch Single Controller ☆425 · Updated this week
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆219 · Updated this week
- ☆435 · Updated last month
- Physics of Language Models, Part 4 ☆247 · Updated 2 months ago
- Muon is Scalable for LLM Training ☆1,318 · Updated 2 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆338 · Updated 5 months ago
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards ☆1,161 · Updated this week
- Simple & Scalable Pretraining for Neural Architecture Research ☆296 · Updated last month
- Dion optimizer algorithm ☆360 · Updated this week
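
For the FlexAttention-based inference engine above, here is a minimal sketch of causal attention written against PyTorch's `torch.nn.attention.flex_attention` API (available in PyTorch 2.5+). The shapes, head counts, and the causal `mask_mod` are illustrative assumptions, not code taken from the listed repository.

```python
# Minimal FlexAttention sketch: causal self-attention via a block mask.
# All sizes below are illustrative assumptions.
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

device = "cuda" if torch.cuda.is_available() else "cpu"
B, H, S, D = 1, 8, 1024, 64  # batch, heads, sequence length, head dim

def causal(b, h, q_idx, kv_idx):
    # Each query position may attend only to itself and earlier positions.
    return q_idx >= kv_idx

# Precompute a block-sparse mask; FlexAttention skips fully masked blocks.
block_mask = create_block_mask(causal, B=None, H=None, Q_LEN=S, KV_LEN=S, device=device)

q, k, v = (torch.randn(B, H, S, D, device=device) for _ in range(3))
out = flex_attention(q, k, v, block_mask=block_mask)
print(out.shape)  # torch.Size([1, 8, 1024, 64])
```

The same `score_mod`/`block_mask` hooks are what let an engine express attention variants such as sliding windows or logit soft-capping (which Gemma 2 uses) without hand-written kernels.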
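
For the memory-layers entry above, a simplified sketch of a trainable key-value lookup: each token scores a large learned key table and gathers only the top-k values, so the extra parameters add little compute per token. The real implementation uses a product-key factorization to avoid scoring every key; this flat top-k version, and every name and size in it, are illustrative assumptions.

```python
# Simplified trainable key-value memory layer (flat top-k lookup).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    def __init__(self, d_model: int, num_keys: int = 4096, k: int = 32):
        super().__init__()
        self.query_proj = nn.Linear(d_model, d_model)
        self.keys = nn.Parameter(torch.randn(num_keys, d_model) * d_model ** -0.5)
        self.values = nn.Embedding(num_keys, d_model)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.query_proj(x)                      # (..., d_model)
        scores = q @ self.keys.t()                  # (..., num_keys)
        top_scores, top_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(top_scores, dim=-1)     # (..., k)
        vals = self.values(top_idx)                 # (..., k, d_model)
        # Weighted sum of the k retrieved values; only k rows of `values`
        # receive gradients per token, keeping updates sparse.
        return (weights.unsqueeze(-1) * vals).sum(dim=-2)

layer = MemoryLayer(d_model=64)
out = layer(torch.randn(2, 16, 64))
print(out.shape)  # torch.Size([2, 16, 64])
```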
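
For the Ring Attention entry above, a single-process sketch of its core numerical trick: a running (online) softmax that consumes KV one chunk at a time, so the full attention matrix is never materialized. In the actual algorithm the chunks are KV blocks rotated around a ring of devices; the chunking, shapes, and function name here are illustrative assumptions.

```python
# Running-softmax accumulation over KV chunks, matching full attention.
import torch

def ring_attention_chunks(q, k_chunks, v_chunks):
    scale = q.shape[-1] ** -0.5
    m = torch.full(q.shape[:-1] + (1,), float("-inf"))  # running row max
    norm = torch.zeros(q.shape[:-1] + (1,))             # running normalizer
    o = torch.zeros_like(q)                             # running output
    for k, v in zip(k_chunks, v_chunks):
        s = (q @ k.transpose(-2, -1)) * scale           # scores for this chunk
        m_new = torch.maximum(m, s.amax(dim=-1, keepdim=True))
        p = torch.exp(s - m_new)                        # probs in new scale
        correction = torch.exp(m - m_new)               # rescale old state
        norm = norm * correction + p.sum(dim=-1, keepdim=True)
        o = o * correction + p @ v
        m = m_new
    return o / norm

q = torch.randn(2, 128, 64)
k = torch.randn(2, 512, 64)
v = torch.randn(2, 512, 64)
out = ring_attention_chunks(q, k.chunk(4, dim=-2), v.chunk(4, dim=-2))
ref = torch.softmax(q @ k.transpose(-2, -1) * q.shape[-1] ** -0.5, dim=-1) @ v
print(torch.allclose(out, ref, atol=1e-5))  # True: chunked == full attention
```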