thinking-machines-lab / batch_invariant_ops
☆945 · Updated 2 months ago
Alternatives and similar repositories for batch_invariant_ops
Users interested in batch_invariant_ops are comparing it to the repositories listed below.
- ☆686 · Updated this week
- Scalable toolkit for efficient model reinforcement ☆1,210 · Updated this week
- Async RL Training at Scale ☆985 · Updated this week
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,437 · Updated this week
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆885 · Updated last week
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark + Toolkit with Torch -> CUDA (+ more DSLs) ☆748 · Updated this week
- Training API and CLI ☆305 · Updated 3 weeks ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆328 · Updated 2 months ago
- PyTorch-native post-training at scale ☆585 · Updated this week
- Open-source framework for the research and development of foundation models. ☆707 · Updated this week
- LLM KV cache compression made easy ☆749 · Updated 3 weeks ago
- A project to improve the skills of large language models ☆756 · Updated this week
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆576 · Updated 3 months ago
- ☆575 · Updated 3 months ago
- PyTorch building blocks for the OLMo ecosystem ☆681 · Updated this week
- ☆224 · Updated last month
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆609 · Updated last week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆370 · Updated last year
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆278 · Updated last month
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆343 · Updated 3 weeks ago
- Muon is Scalable for LLM Training ☆1,397 · Updated 5 months ago
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆467 · Updated 7 months ago
- Efficient LLM Inference over Long Sequences ☆393 · Updated 6 months ago
- Dion optimizer algorithm ☆413 · Updated last week
- Recipes to scale inference-time compute of open models ☆1,123 · Updated 7 months ago
- kernels, of the mega variety ☆640 · Updated 3 months ago
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆467 · Updated 2 weeks ago
- Ring attention implementation with flash attention ☆961 · Updated 4 months ago
- Training library for Megatron-based models with bi-directional Hugging Face conversion capability ☆347 · Updated this week
- ☆465 · Updated 4 months ago
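The memory-layers entry above describes a trainable key-value lookup that adds parameters to a model without increasing FLOPs. As a rough illustration of that idea only, here is a minimal NumPy sketch of a sparse top-k memory lookup; all names and shapes are hypothetical and not taken from any of the repositories listed:

```python
import numpy as np

def memory_layer(query, keys, values, top_k=2):
    """Sparse key-value memory lookup (illustrative sketch).

    Scores the query against every key, keeps only the top_k matches,
    and returns a softmax-weighted mix of their values. Because only
    top_k value rows are touched per query, the memory table can grow
    (more parameters) without the per-query compute growing with it.
    """
    scores = keys @ query                            # (num_slots,)
    top = np.argpartition(scores, -top_k)[-top_k:]   # indices of best slots
    w = np.exp(scores[top] - scores[top].max())      # numerically stable softmax
    w /= w.sum()
    return w @ values[top]                           # (value_dim,)

rng = np.random.default_rng(0)
keys = rng.standard_normal((16, 4))    # 16 memory slots, key dim 4
values = rng.standard_normal((16, 8))  # value dim 8
out = memory_layer(rng.standard_normal(4), keys, values)
print(out.shape)  # (8,)
```

In the actual papers and implementations, `keys` and `values` are trained end-to-end and the tables are far larger; the point of the sketch is only the top-k selection that keeps compute sparse.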