Luowaterbi / TokenRecycling
[ACL 2025 Oral 🔥] Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling
☆21 · Updated 2 months ago
Alternatives and similar repositories for TokenRecycling
Users interested in TokenRecycling are comparing it to the libraries listed below
- ☆49 · Updated last year
- Code associated with the paper **Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding** ☆214 · Updated 11 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆219 · Updated 2 months ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆245 · Updated 4 months ago
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆357 · Updated 9 months ago
- [ICLR 2025 🔥] D2O: Dynamic Discriminative Operations for Efficient Long-Context Inference of Large Language Models ☆27 · Updated 6 months ago
- ☆302 · Updated 6 months ago
- A survey of long-context LLMs from four perspectives: architecture, infrastructure, training, and evaluation ☆61 · Updated 10 months ago
- Bridge Megatron-Core to Hugging Face/Reinforcement Learning ☆188 · Updated this week
- Official Implementation of "Learning Harmonized Representations for Speculative Sampling" (HASS) ☆52 · Updated 10 months ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆148 · Updated last month
- ☆73 · Updated 9 months ago
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆200 · Updated 2 months ago
- ☆29 · Updated 3 months ago
- Reproducing R1 for Code with Reliable Rewards ☆282 · Updated 8 months ago
- Codes for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆370 · Updated last year
- Repository of LV-Eval Benchmark ☆73 · Updated last year
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆138 · Updated last year
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆187 · Updated 4 months ago
- [ACL 2025 main] FR-Spec: Frequency-Ranked Speculative Sampling ☆49 · Updated 6 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆273 · Updated last year
- Implementation of FP8/INT8 rollout for RL training without a performance drop. ☆288 · Updated 2 months ago
- Official code for GliDe with a CaPE ☆20 · Updated last year
- Evaluation utilities based on SymPy. ☆21 · Updated last year
- Multi-Candidate Speculative Decoding ☆39 · Updated last year
- [ICLR 2025] Code and data for paper: Not All Heads Matter: A Head-Level KV Cache Compression Method with Integrated Retrieval and Reasoning ☆40 · Updated 10 months ago
- ☆129 · Updated 7 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆98 · Updated 3 months ago
- Heuristic filtering framework for RefineCode ☆82 · Updated 10 months ago
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art, novel, exciting long2short methods on large reasoning models. It contains… ☆258 · Updated 5 months ago