thu-nics / R2R
[NeurIPS'25] The official code implementation for the paper "R2R: Efficiently Navigating Divergent Reasoning Paths with Small-Large Model Token Routing"
☆51 · Updated 2 weeks ago
Alternatives and similar repositories for R2R
Users interested in R2R are comparing it to the libraries listed below.
- [NeurIPS'25] dKV-Cache: The Cache for Diffusion Language Models ☆106 · Updated 4 months ago
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More ☆57 · Updated 8 months ago
- [ICML 2025] SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity ☆58 · Updated 3 months ago
- [ICML 2025] This is the official PyTorch implementation of "ZipAR: Accelerating Auto-regressive Image Generation through Spatial Locality… ☆53 · Updated 6 months ago
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆64 · Updated 7 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ☆164 · Updated last month
- [CoLM'25] The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆147 · Updated 3 months ago
- Locality-aware Parallel Decoding for Efficient Autoregressive Image Generation ☆71 · Updated 3 months ago
- ☆96 · Updated last month
- ☆61 · Updated 3 months ago
- A curated list of recent papers on efficient video attention for video diffusion models, including sparsification, quantization, and cach… ☆41 · Updated last month
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆236 · Updated 3 months ago
- SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse–Linear Attention ☆78 · Updated this week
- [NeurIPS 2024] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching ☆116 · Updated last year
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆96 · Updated 9 months ago
- [ICML 2024] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆94 · Updated 10 months ago
- M1: Towards Scalable Test-Time Compute with Mamba Reasoning Models ☆42 · Updated 3 months ago
- Dimple, the first Discrete Diffusion Multimodal Large Language Model ☆103 · Updated 3 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆119 · Updated 3 months ago
- [NeurIPS 2024] The official implementation of ZipCache: Accurate and Efficient KV Cache Quantization with Salient Token Identification ☆29 · Updated 6 months ago
- [NeurIPS 2025] VeriThinker: Learning to Verify Makes Reasoning Model Efficient ☆55 · Updated 3 weeks ago
- Implementation of the Negative-aware Finetuning (NFT) algorithm for "Bridging Supervised Learning and Reinforcement Learning in Math Reasonin… ☆43 · Updated last month
- [ICLR 2025] Official PyTorch Implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆26 · Updated 2 months ago
- ☆91 · Updated 7 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Updated last year
- [ICML 2024] SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models ☆21 · Updated last year
- ✈️ [ICCV 2025] Towards Stabilized and Efficient Diffusion Transformers through Long-Skip-Connections with Spectral Constraints ☆75 · Updated 3 months ago
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths, and finetune the quantized LLMs ☆14 · Updated last year
- VideoNSA: Native Sparse Attention Scales Video Understanding ☆44 · Updated last week
- [NeurIPS 2025] ScaleKV: Memory-Efficient Visual Autoregressive Modeling with Scale-Aware KV Cache Compression ☆49 · Updated 4 months ago