Implementation of FP8/INT8 rollout for RL training without performance drop.
☆297 · Updated Nov 7, 2025 (4 months ago)
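Flash-RL's tagline names the core idea: generate RL rollouts with FP8/INT8-quantized policy weights while the trainer keeps the full-precision copy. As a rough illustration only (this is not Flash-RL's actual code or API; the function names, per-tensor scaling scheme, and E4M3 dtype choice are assumptions), a minimal per-tensor FP8 weight cast of the kind such a rollout path relies on could look like this:

```python
# Illustrative sketch, NOT Flash-RL's implementation: per-tensor FP8 (E4M3)
# quantization of a weight tensor before handing it to the rollout engine.
import torch

FP8_E4M3_MAX = 448.0  # largest finite magnitude representable in torch.float8_e4m3fn

def quantize_fp8_per_tensor(w: torch.Tensor):
    """Quantize a weight tensor to FP8 E4M3 with a single per-tensor scale."""
    amax = w.abs().max().float().clamp(min=1e-12)
    scale = amax / FP8_E4M3_MAX  # dequantization rule: w ≈ w_fp8 * scale
    w_fp8 = (w.float() / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return w_fp8, scale

def dequantize_fp8(w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover a BF16 approximation of the original weights."""
    return (w_fp8.float() * scale).to(torch.bfloat16)

if __name__ == "__main__":
    w = torch.randn(4096, 4096, dtype=torch.bfloat16)  # stand-in for one policy weight
    w_fp8, scale = quantize_fp8_per_tensor(w)
    err = (dequantize_fp8(w_fp8, scale) - w).abs().max().item()
    print(f"per-tensor FP8 round-trip, max abs error: {err:.4f}")
```

In a typical FP8-rollout setup, the quantized weights plus their scales are loaded into the inference engine for rollout generation, while the BF16 master weights remain with the trainer for optimizer updates.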
Alternatives and similar repositories for Flash-RL
Users interested in Flash-RL are comparing it to the libraries listed below.
- ☆234 · Updated Nov 19, 2025 (4 months ago)
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ☆131 · Updated Jun 24, 2025 (8 months ago)
- Official implementation for DenseMixer: Improving MoE Post-Training with Precise Router Gradient · ☆66 · Updated Aug 3, 2025 (7 months ago)
- Accelerate LLM preference tuning via prefix sharing with a single line of code · ☆51 · Updated Jul 4, 2025 (8 months ago)
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training · ☆262 · Updated Aug 9, 2025 (7 months ago)
- Fast and memory-efficient exact kmeans · ☆330 · Updated Mar 17, 2026 (last week)
- Curse-of-memory phenomenon of RNNs in sequence modelling · ☆19 · Updated May 8, 2025 (10 months ago)
- 🔥 LLM-powered GPU kernel synthesis: Train models to convert PyTorch ops into optimized Triton kernels via SFT+RL. Multi-turn compilation… · ☆132 · Updated Nov 10, 2025 (4 months ago)
- Defeating the Training-Inference Mismatch via FP16 · ☆183 · Updated Nov 14, 2025 (4 months ago)
- ☆38 · Updated Aug 7, 2025 (7 months ago)
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] · ☆67 · Updated Oct 2, 2025 (5 months ago)
- An Efficient and User-Friendly Scaling Library for Reinforcement Learning with Large Language Models · ☆2,989 · Updated this week
- Distributed Compiler based on Triton for Parallel Systems · ☆1,394 · Updated Mar 11, 2026 (last week)
- Code repo for the paper "SpinQuant: LLM Quantization with Learned Rotations" · ☆380 · Updated Feb 14, 2025 (last year)
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning · ☆204 · Updated this week
- High Performance KV Cache Store for LLM · ☆51 · Updated this week
- [NeurIPS 2025] Scaling Speculative Decoding with Lookahead Reasoning · ☆67 · Updated Oct 31, 2025 (4 months ago)
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs · ☆229 · Updated Jan 11, 2025 (last year)
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ☆977 · Updated Feb 5, 2026 (last month)
- Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible. · ☆4,855 · Updated this week
- 🚀 Efficient implementations of state-of-the-art linear attention models · ☆4,630 · Updated this week
- DeeperGEMM: crazy optimized version · ☆75 · Updated May 5, 2025 (10 months ago)
- Fast low-bit matmul kernels in Triton · ☆438 · Updated Feb 1, 2026 (last month)
- Code and data for paper "(How) do Language Models Track State?" · ☆22 · Updated Mar 31, 2025 (11 months ago)
- verl: Volcano Engine Reinforcement Learning for LLMs · ☆20,097 · Updated this week
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… · ☆482 · Updated Mar 10, 2026 (last week)
- 🎓 Automatically updates circult-eda-mlsys-tinyml papers daily using GitHub Actions (updated every 8 hours) · ☆10 · Updated this week
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM · ☆180 · Updated Jul 12, 2024 (last year)
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference · ☆377 · Updated Jul 10, 2025 (8 months ago)
- [ICLR 2026] RuleReasoner: Reinforced Rule-based Reasoning via Domain-aware Dynamic Sampling · ☆36 · Updated Feb 25, 2026 (3 weeks ago)
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) · ☆9,191 · Updated Mar 16, 2026 (last week)
- Ring attention implementation with flash attention · ☆996 · Updated Sep 10, 2025 (6 months ago)
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization · ☆408 · Updated Aug 13, 2024 (last year)
- [NeurIPS 2025] The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond · ☆198 · Updated Jul 7, 2025 (8 months ago)
- ☆52 · Updated May 19, 2025 (10 months ago)
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs · ☆123 · Updated Jul 4, 2025 (8 months ago)
- APRIL: Active Partial Rollouts in Reinforcement Learning to Tame Long-tail Generation. A system-level optimization for scalable LLM tra… · ☆54 · Updated Oct 11, 2025 (5 months ago)
- ☆11 · Updated Apr 3, 2023 (2 years ago)
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. · ☆1,273 · Updated Aug 28, 2025 (6 months ago)