wang2226 / Awesome-LLM-Decoding
Paper list on decoding methods for LLMs and LVLMs
⭐58 · Updated 3 months ago
Alternatives and similar repositories for Awesome-LLM-Decoding
Users interested in Awesome-LLM-Decoding are comparing it to the libraries listed below.
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ⭐159 · Updated last week
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ⭐80 · Updated 9 months ago
- Chain-of-Thought (CoT) is so hot, and so long! We need short reasoning processes! ⭐69 · Updated 6 months ago
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" ⭐90 · Updated 7 months ago
- ⭐51 · Updated 2 months ago
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs ⭐182 · Updated 3 months ago
- ⭐67 · Updated 5 months ago
- A versatile toolkit for applying the Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… (a minimal logit-lens sketch follows this list) ⭐111 · Updated last month
- ⭐53 · Updated 4 months ago
- A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, Agent, and Beyond ⭐301 · Updated last week
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ⭐120 · Updated last year
- Official code for "SEAL: Steerable Reasoning Calibration of Large Language Models for Free" ⭐42 · Updated 6 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ⭐84 · Updated 4 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ⭐86 · Updated 7 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ⭐128 · Updated 6 months ago
- ⭐167 · Updated 4 months ago
- ⭐132 · Updated 3 weeks ago
- [ICLR 2025] Language Imbalance Driven Rewarding for Multilingual Self-improving ⭐20 · Updated last month
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ⭐256 · Updated 4 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ⭐80 · Updated 3 months ago
- [ICLR 2025 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ⭐62 · Updated 2 months ago
- Implementation code for ACL 2024: "Advancing Parameter Efficiency in Fine-tuning via Representation Editing" ⭐14 · Updated last year
- The repo for In-context Autoencoder ⭐143 · Updated last year
- The implementation of the paper "On Reasoning Strength Planning in Large Reasoning Models" ⭐24 · Updated 3 months ago
- ⭐63 · Updated 7 months ago
- FeatureAlignment = Alignment + Mechanistic Interpretability ⭐29 · Updated 6 months ago
- Official repository for "CODI: Compressing Chain-of-Thought into Continuous Space via Self-Distillation" ⭐25 · Updated last month
- This is the repository for DEER, a Dynamic Early Exit in Reasoning method for Large Reasoning Language Models. ⭐170 · Updated 3 months ago
- A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository agg… ⭐139 · Updated 2 months ago
- ⭐127 · Updated 6 months ago
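
For readers unfamiliar with the technique behind the Logit Lens toolkit listed above, here is a minimal sketch of the logit-lens idea: decode each layer's hidden state through the model's unembedding matrix to see what the model would predict at that depth. This is an illustrative sketch using the Hugging Face `transformers` API, not code from any repository listed here; the model name, the prompt, and the Llama-style `model.model.norm` access are assumptions for demonstration.

```python
# Minimal logit-lens sketch (illustrative only; not the API of the toolkit above).
# Assumptions: a Llama-style decoder whose final RMSNorm lives at model.model.norm,
# and whose unembedding matrix is exposed via get_output_embeddings().
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B"  # any causal LM works; this choice is an assumption
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states: tuple of (num_layers + 1) tensors, each (batch, seq_len, hidden_dim);
# index 0 is the embedding output, the last entry is the final layer.
unembed = model.get_output_embeddings().weight   # (vocab_size, hidden_dim)
final_norm = model.model.norm                    # Llama-style final RMSNorm (assumption)

for layer_idx, hidden in enumerate(out.hidden_states):
    # Normalize as the last layer would, then project onto the vocabulary.
    logits = final_norm(hidden[:, -1, :]) @ unembed.T   # (batch, vocab_size)
    top_id = logits.argmax(dim=-1)[0].item()
    print(f"layer {layer_idx:2d}: top next-token prediction = {tok.decode([top_id])!r}")
```

Printing the top token per layer shows how the intermediate prediction sharpens toward the final answer with depth, which is the behavior that logit-lens toolkits such as the one listed above visualize more systematically.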