Trustworthy-ML-Lab / ThinkEdit
An effective weight-editing method for mitigating overly short reasoning in LLMs, and a mechanistic study uncovering how reasoning length is encoded in the model’s representation space.
☆12 · Updated this week
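For context on what the one-line description means by "weight editing", here is a minimal sketch, assuming the edit amounts to projecting a hypothesized "short reasoning" residual-stream direction out of the output weights of selected attention heads. The function name, tensor shapes, and the way `short_dir` would be obtained are illustrative assumptions, not ThinkEdit's actual code.

```python
import torch

def remove_direction(w_o: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Project a residual-stream direction out of one head's output weights.

    w_o       : (d_model, d_head) output-projection weights for a single head
    direction : (d_model,) hypothesized "short reasoning" direction
    """
    d = direction / direction.norm()        # unit-normalize the direction
    # (I - d d^T) @ w_o removes the component of every output along d
    return w_o - torch.outer(d, d @ w_o)

# Toy shapes for illustration only.
w_o = torch.randn(4096, 128)
short_dir = torch.randn(4096)
w_edited = remove_direction(w_o, short_dir)
# After the edit, the head's outputs are orthogonal to short_dir.
print((short_dir / short_dir.norm() @ w_edited).abs().max())  # ≈ 0
```

The intuition behind an edit of this form is that heads whose weights are projected this way can no longer write along the removed direction, steering the model away from the behavior that direction encodes.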
Alternatives and similar repositories for ThinkEdit
Users interested in ThinkEdit are comparing it to the repositories listed below.
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆40 · Updated 4 months ago
- The implementation of the paper "On Reasoning Strength Planning in Large Reasoning Models" ☆21 · Updated last month
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" ☆86 · Updated 6 months ago
- A Sober Look at Language Model Reasoning ☆81 · Updated 2 months ago
- ☆20 · Updated 9 months ago
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic ☆27 · Updated 7 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆173 · Updated last month
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆47 · Updated 10 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆77 · Updated 2 months ago
- ☆10 · Updated last year
- ThinK: Thinner Key Cache by Query-Driven Pruning ☆23 · Updated 6 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆80 · Updated 2 months ago
- Official code for Guiding Language Model Math Reasoning with Planning Tokens ☆15 · Updated last year
- ☆66 · Updated 4 months ago
- The official repository of the paper "AdaR1: From Long-CoT to Hybrid-CoT via Bi-Level Adaptive Reasoning Optimization" ☆18 · Updated 3 months ago
- ☆49 · Updated last month
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆83 · Updated 6 months ago
- [ICLR 2025] Code and data repository for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆75 · Updated 8 months ago
- ☆15 · Updated last year
- Codebase for decoding compressed trust. ☆24 · Updated last year
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆36 · Updated last month
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆55 · Updated last month
- The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation" ☆20 · Updated 4 months ago
- ☆51 · Updated last year
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆153 · Updated last week
- ☆14 · Updated last year
- ☆17 · Updated 3 months ago
- Implementation of CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation ☆23 · Updated 6 months ago
- Test-time training on nearest neighbors for large language models ☆45 · Updated last year
- ☆46 · Updated last year