wang2226 / Awesome-LLM-Decoding
Paper list on decoding methods for LLMs and LVLMs
☆55 · Updated last month
Alternatives and similar repositories for Awesome-LLM-Decoding
Users interested in Awesome-LLM-Decoding are comparing it to the libraries listed below.
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆95 · Updated 5 months ago (a minimal Logit Lens sketch follows this list)
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆142 · Updated 2 weeks ago
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆71 · Updated 7 months ago
- ☆52 · Updated 2 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆171 · Updated last month
- ☆65 · Updated 3 months ago
- Chain of Thought (CoT) is so hot! So long! We need short reasoning processes! ☆68 · Updated 4 months ago
- The repo for the In-context Autoencoder ☆130 · Updated last year
- A curated list of LLM interpretability-related material: tutorials, libraries, surveys, papers, blogs, etc. ☆262 · Updated 4 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆114 · Updated last year
- ☆117 · Updated 4 months ago
- ☆44 · Updated last year
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆29 · Updated 4 months ago
- A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, and Beyond ☆277 · Updated last month
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art, novel, exciting long2short methods on large reasoning models. It contains… ☆241 · Updated 2 months ago
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆125 · Updated 4 months ago
- [ICLR 2025 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆58 · Updated 2 weeks ago
- ☆47 · Updated 3 weeks ago
- ☆155 · Updated 2 months ago
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" ☆86 · Updated 5 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆76 · Updated 2 months ago
- [NeurIPS 2024] RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models ☆77 · Updated 10 months ago
- Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆83 · Updated 3 weeks ago
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆39 · Updated 8 months ago
- A curated list of resources for activation engineering ☆99 · Updated 2 months ago
- ☆252 · Updated last month
- Implementation code for the ACL 2024 paper "Advancing Parameter Efficiency in Fine-tuning via Representation Editing" ☆14 · Updated last year
- Awesome SAE papers ☆40 · Updated 2 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆81 · Updated 4 months ago
- [ICML 2025] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆75 · Updated last month
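For context on the Logit Lens entry at the top of the list: a minimal sketch of the general technique looks roughly like the snippet below. It assumes a Llama/Qwen-family checkpoint and the standard Hugging Face transformers attribute names (`model.model.norm`, `model.lm_head`); it is not the toolkit's actual interface, just an illustration of decoding intermediate hidden states through the unembedding matrix.

```python
# Minimal Logit Lens sketch (illustrative only, not the toolkit's API):
# decode every intermediate hidden state through the model's final norm and
# unembedding matrix to see which token each layer would predict next.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B"  # assumption: any Llama/Qwen-style causal LM works the same way
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states is a tuple of (num_layers + 1) tensors, each [batch, seq, hidden].
# model.model.norm and model.lm_head are the names used by Llama/Qwen-family models
# in Hugging Face transformers; other architectures may expose them differently.
for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.model.norm(h[:, -1]))  # lens on the last position
    print(f"layer {layer:2d} -> {tok.decode(logits.argmax(dim=-1))!r}")
```

Early layers typically decode to generic tokens while later layers converge on the final prediction; per-layer readouts of this kind are what such toolkits package for the supported models.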