thinking-machines-lab / batch_invariant_ops
☆917 · Updated last month
Alternatives and similar repositories for batch_invariant_ops
Users interested in batch_invariant_ops are comparing it to the libraries listed below.
- Async RL Training at Scale ☆867 · Updated this week
- Scalable toolkit for efficient model reinforcement ☆1,048 · Updated this week
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆851 · Updated last week
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆313 · Updated last month
- Open-source framework for the research and development of foundation models. ☆640 · Updated this week
- LLM KV cache compression made easy ☆701 · Updated this week
- PyTorch building blocks for the OLMo ecosystem ☆482 · Updated this week
- PyTorch-native post-training at scale ☆549 · Updated this week
- ☆555 · Updated 2 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (a minimal sketch of this idea follows the list) ☆360 · Updated 11 months ago
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,287 · Updated last week
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA (+ more DSLs) ☆683 · Updated this week
- A project to improve skills of large language models ☆628 · Updated this week
- Training API and CLI ☆238 · Updated last week
- ☆224 · Updated last week
- ☆317 · Updated this week
- [NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards ☆1,242 · Updated 3 weeks ago
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆561 · Updated last month
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆316 · Updated this week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆928 · Updated 8 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆347 · Updated 7 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆573 · Updated last month
- Parallel Scaling Law for Language Models — Beyond Parameter and Inference Time Scaling ☆456 · Updated 6 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆271 · Updated last week
- Load compute kernels from the Hub ☆337 · Updated last week
- An extension of the nanoGPT repository for training small MoE models. ☆215 · Updated 8 months ago
- Pretraining and inference code for a large-scale depth-recurrent language model ☆850 · Updated last month
- Muon is Scalable for LLM Training ☆1,372 · Updated 4 months ago
- Physics of Language Models, Part 4 ☆262 · Updated 4 months ago
- [ICML 2024] CLLMs: Consistency Large Language Models ☆406 · Updated last year
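The memory-layers entry above describes the one mechanism in this list in enough detail to sketch: a trainable key-value lookup that adds parameters to a model without a matching increase in FLOPs. The code below is a simplified illustration of that idea, not code from the listed repository; the class name, slot count, and top-k parameter are all assumptions, and the naive scoring of every key here is exactly what real implementations replace with a product-key lookup so that compute stays flat as the slot count grows.

```python
# Minimal sketch of a memory layer: a learned key-value table queried
# with top-k selection, so only k value rows contribute per token.
# Hypothetical simplification for illustration, not the repo's API.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMemoryLayer(nn.Module):
    def __init__(self, dim: int, num_slots: int = 4096, topk: int = 4):
        super().__init__()
        # Extra capacity lives in these tables; growing num_slots adds
        # parameters without changing the per-token output computation.
        self.keys = nn.Parameter(torch.randn(num_slots, dim) * dim**-0.5)
        self.values = nn.Parameter(torch.randn(num_slots, dim) * dim**-0.5)
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim). Score all slots, keep the top-k per token.
        # (A product-key scheme would avoid this full scan.)
        scores = x @ self.keys.t()                  # (batch, num_slots)
        w, idx = scores.topk(self.topk, dim=-1)     # (batch, k)
        w = F.softmax(w, dim=-1)
        v = self.values[idx]                        # (batch, k, dim)
        return (w.unsqueeze(-1) * v).sum(dim=1)     # (batch, dim)

x = torch.randn(2, 64)
layer = SimpleMemoryLayer(dim=64)
print(layer(x).shape)  # torch.Size([2, 64])
```

Because only the top-k value rows are gathered per token, the output cost is fixed by k rather than by the table size, which is the sense in which such layers add parameters "without increasing FLOPs" (modulo the key-scoring scan this sketch does naively).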