yaof20 / Flash-RL
Implementation of FP8/INT8 rollout for RL training without performance drop.
☆253 Updated 2 weeks ago
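For context on the description above, below is a minimal, hypothetical sketch of per-tensor FP8 weight quantization for a rollout (inference) pass in plain PyTorch, assuming PyTorch ≥ 2.1 with the `torch.float8_e4m3fn` dtype. It is not code from Flash-RL; the helper names are made up, and a real FP8 rollout path would use fused FP8 GEMM kernels rather than dequantize-then-matmul.

```python
import torch

# Hypothetical helpers, not from Flash-RL: per-tensor FP8 quantization of a
# weight matrix, plus a dequantize-on-the-fly linear for the rollout pass.
def fp8_quantize(w: torch.Tensor):
    finfo = torch.finfo(torch.float8_e4m3fn)
    scale = w.abs().max().clamp(min=1e-12) / finfo.max   # per-tensor scale
    w_fp8 = (w / scale).clamp(finfo.min, finfo.max).to(torch.float8_e4m3fn)
    return w_fp8, scale

def fp8_linear(x: torch.Tensor, w_fp8: torch.Tensor, scale: torch.Tensor):
    # Reference path only: real FP8 inference would run FP8 tensor-core GEMMs.
    return x @ (w_fp8.to(x.dtype) * scale).t()

w = torch.randn(256, 128)
x = torch.randn(4, 128)
w_fp8, scale = fp8_quantize(w)
err = (fp8_linear(x, w_fp8, scale) - x @ w.t()).abs().max()
print(f"max abs error vs FP32 linear: {err:.4f}")
```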
Alternatives and similar repositories for Flash-RL
Users interested in Flash-RL are comparing it to the libraries listed below.
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning ☆132 Updated this week
- siiRL: Shanghai Innovation Institute RL Framework for Advanced LLMs and Multi-Agent Systems ☆219 Updated this week
- 🔥 A minimal training framework for scaling FLA models ☆265 Updated last month
- ☆118 Updated 4 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆241 Updated 2 months ago
- Async pipelined version of Verl ☆119 Updated 6 months ago
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm… ☆68 Updated last month
- Code for paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆143 Updated this week
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆156 Updated 3 weeks ago
- ☆129 Updated 4 months ago
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ☆320 Updated 5 months ago
- ☆91 Updated 7 months ago
- Efficient Triton implementation of Native Sparse Attention. ☆235 Updated 4 months ago
- 🔥 LLM-powered GPU kernel synthesis: Train models to convert PyTorch ops into optimized Triton kernels via SFT+RL. Multi-turn compilation… ☆82 Updated last week
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification ☆64 Updated 3 months ago
- 16-fold memory access reduction with nearly no loss ☆105 Updated 6 months ago
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [NeurIPS '25] ☆53 Updated 2 weeks ago
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆553 Updated last week
- Best practices for Megatron on veRL and a tuning guide ☆93 Updated 3 weeks ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆118 Updated 6 months ago
- Training library for Megatron-based models ☆116 Updated this week
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆249 Updated 3 months ago
- Official Implementation of SAM-Decoding: Speculative Decoding via Suffix Automaton ☆32 Updated 8 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆193 Updated 4 months ago
- Triton implementation of FlashAttention2 that adds Custom Masks. ☆138 Updated last year
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training. ☆197 Updated 4 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆169 Updated last year
- ☆96 Updated last month
- qwen-nsa ☆78 Updated 6 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆338 Updated 3 months ago