thinking-machines-lab / batch_invariant_ops
☆843 · Updated last week
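For context on the repository these alternatives are compared against: batch_invariant_ops addresses the fact that GPU kernels may change their reduction order depending on batch size, and floating-point reductions are order-sensitive. The following is a minimal, library-independent sketch of that underlying numerical fact in plain Python; it assumes nothing about the project's API.

```python
# The same numbers summed in two different orders give two different
# floating-point results, because rounding makes addition non-associative.
s1 = (0.1 + 1e20) - 1e20   # 0.1 is absorbed into 1e20 before the subtraction
s2 = 0.1 + (1e20 - 1e20)   # the large terms cancel first, preserving 0.1

print(s1)  # 0.0
print(s2)  # 0.1
assert s1 != s2
```

Batch-invariant kernels aim to fix the reduction order regardless of batch shape, so repeated runs of the same request produce bitwise-identical results.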
Alternatives and similar repositories for batch_invariant_ops
Users interested in batch_invariant_ops are comparing it to the libraries listed below.
- Scalable toolkit for efficient model reinforcement ☆956, updated this week
- Post-training with Tinker ☆1,028, updated last week
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,060, updated this week
- LLM KV cache compression made easy ☆660, updated last week
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆775, updated this week
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆538, updated 2 weeks ago
- Async RL Training at Scale ☆709, updated this week
- ☆222, updated 3 weeks ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference ☆296, updated 2 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆903, updated 7 months ago
- A project to improve skills of large language models ☆587, updated this week
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆539, updated this week
- KernelBench: Can LLMs Write GPU Kernels? A benchmark with Torch -> CUDA problems ☆612, updated 2 weeks ago
- Parallel Scaling Law for Language Models: Beyond Parameter and Inference-Time Scaling ☆448, updated 5 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆270, updated 2 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆342, updated 10 months ago
- ☆534, updated last month
- Open-source framework for the research and development of foundation models ☆501, updated this week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding" (ACL 2024) ☆344, updated 5 months ago
- Pretraining and inference code for a large-scale depth-recurrent language model ☆836, updated last week
- [NeurIPS 2025 Spotlight] Reasoning environments for reinforcement learning with verifiable rewards ☆1,194, updated 2 weeks ago
- An extension of the nanoGPT repository for training small MoE models ☆202, updated 7 months ago
- PyTorch building blocks for the OLMo ecosystem ☆307, updated this week
- Large Context Attention ☆743, updated last week
- Efficient LLM Inference over Long Sequences ☆390, updated 3 months ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates ☆236, updated this week
- Training API ☆158, updated last week
- ☆441, updated last month
- [ICML 2024] CLLMs: Consistency Large Language Models ☆405, updated 11 months ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆494, updated 8 months ago