ShopeeLLM / Spec-RL
SPEC-RL: Accelerating On-Policy Reinforcement Learning via Speculative Rollouts
☆55 · Updated 2 weeks ago
Alternatives and similar repositories for Spec-RL
Users interested in Spec-RL are comparing it to the repositories listed below.
- ☆124 · Updated 6 months ago
- Official Implementation of SAM-Decoding: Speculative Decoding via Suffix Automaton ☆39 · Updated 10 months ago
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training. ☆214 · Updated 6 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆52 · Updated last year
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆137 · Updated last month
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? ☆119 · Updated last year
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆112 · Updated 9 months ago
- Implementation of FP8/INT8 rollout for RL training without performance drop. ☆281 · Updated last month
- ☆293 · Updated 5 months ago
- ☆41 · Updated 9 months ago
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** ☆215 · Updated 10 months ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆52 · Updated last year
- ☆48 · Updated 4 months ago
- ☆29 · Updated 2 months ago
- Multi-Candidate Speculative Decoding ☆38 · Updated last year
- The Official Implementation of Ada-KV [NeurIPS 2025] ☆118 · Updated 3 weeks ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆135 · Updated last year
- [ICLR'24 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆102 · Updated 6 months ago
- MiroRL is an MCP-first reinforcement learning framework for deep research agents. ☆183 · Updated 3 months ago
- [NeurIPS 2025] Scaling Speculative Decoding with Lookahead Reasoning ☆56 · Updated last month
- This is the official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models" ☆55 · Updated last year
- ☆85 · Updated last month
- ☆49 · Updated last year
- 🔥 LLM-powered GPU kernel synthesis: train models to convert PyTorch ops into optimized Triton kernels via SFT+RL. Multi-turn compilation… ☆107 · Updated last month
- A Comprehensive Survey on Long Context Language Modeling ☆215 · Updated 3 weeks ago
- ☆107 · Updated 3 months ago
- "Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding" Zhenyu Zhang, Runjin Chen, Shiw… ☆31 · Updated last year
- ☆48 · Updated 7 months ago
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆61 · Updated 10 months ago
- [NeurIPS 2025] The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond ☆187 · Updated 5 months ago