thinking-machines-lab / batch_invariant_ops
☆894 · Updated last week
Alternatives and similar repositories for batch_invariant_ops
Users interested in batch_invariant_ops are comparing it to the libraries listed below.
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆820 · Updated this week
- Scalable toolkit for efficient model reinforcement ☆1,009 · Updated this week
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA (+ more DSLs) ☆655 · Updated this week
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,170 · Updated this week
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆302 · Updated last week
- Async RL Training at Scale ☆749 · Updated this week
- PyTorch-native post-training at scale ☆509 · Updated this week
- ☆545 · Updated last month
- Open-source framework for the research and development of foundation models. ☆600 · Updated this week
- LLM KV cache compression made easy ☆680 · Updated this week
- ☆225 · Updated 3 weeks ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆920 · Updated 7 months ago
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆556 · Updated last month
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆564 · Updated 2 weeks ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆291 · Updated this week
- Muon is Scalable for LLM Training ☆1,354 · Updated 3 months ago
- An extension of the nanoGPT repository for training small MoE models. ☆210 · Updated 8 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆355 · Updated 11 months ago
- Load compute kernels from the Hub ☆326 · Updated this week
- A project to improve skills of large language models ☆608 · Updated this week
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆450 · Updated 5 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆271 · Updated last week
- Training API ☆202 · Updated 3 weeks ago
- Post-training with Tinker ☆1,455 · Updated this week
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆241 · Updated this week
- Dion optimizer algorithm ☆383 · Updated this week
- Physics of Language Models, Part 4 ☆255 · Updated 3 months ago
- slime is an LLM post-training framework for RL Scaling. ☆2,407 · Updated last week
- ☆449 · Updated 2 months ago
- ☆1,073 · Updated last week
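The memory-layers entry above describes a trainable key-value lookup that adds parameters without a matching increase in per-token compute. A minimal NumPy sketch of that idea is below; all names and shapes are hypothetical, and for clarity this naive version scores every key (real memory-layer implementations typically use a product-key decomposition so that even the scoring step avoids touching all keys):

```python
import numpy as np

# Hypothetical sketch of a memory layer's sparse key-value lookup.
# `keys` and `values` are the extra trainable parameters: capacity grows
# with n_keys, but each query only *reads* k value vectors.
rng = np.random.default_rng(0)

n_keys, d, k = 1024, 16, 4                 # memory size, embedding dim, top-k
keys = rng.standard_normal((n_keys, d))    # trainable keys (extra params)
values = rng.standard_normal((n_keys, d))  # trainable values (extra params)

def memory_lookup(query):
    scores = keys @ query                   # (n_keys,) similarity per key
    top = np.argpartition(scores, -k)[-k:]  # indices of the k best keys
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                            # softmax over the selected keys
    return w @ values[top]                  # (d,) sparse mix of k values

out = memory_lookup(rng.standard_normal(d))
print(out.shape)  # (16,)
```

The point of the sketch is the sparsity of the read path: only `k` of the `n_keys` value rows contribute to the output, so value-table size can scale up largely independently of per-token FLOPs.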