Trustworthy-ML-Lab / ThinkEdit
[EMNLP 25] An effective and interpretable weight-editing method for mitigating overly short reasoning in LLMs, and a mechanistic study uncovering how reasoning length is encoded in the model’s representation space.
☆15 · Updated last month
Alternatives and similar repositories for ThinkEdit
Users interested in ThinkEdit are comparing it to the libraries listed below.
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆44 · Updated 7 months ago
- [NeurIPS 2025] The implementation of the paper "On Reasoning Strength Planning in Large Reasoning Models" ☆25 · Updated 4 months ago
- A Sober Look at Language Model Reasoning ☆87 · Updated last month
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆80 · Updated 4 months ago
- ☆55 · Updated 4 months ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆37 · Updated 3 months ago
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆83 · Updated 10 months ago
- Localize-and-Stitch: Efficient Model Merging via Sparse Task Arithmetic ☆30 · Updated last month
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" ☆97 · Updated 8 months ago
- Codebase for decoding compressed trust ☆25 · Updated last year
- The official repository of the NeurIPS'25 paper "Ada-R1: From Long-CoT to Hybrid-CoT via Bi-Level Adaptive Reasoning Optimization" ☆20 · Updated last week
- ☆11 · Updated 3 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆51 · Updated last year
- ☆67 · Updated 7 months ago
- ☆29 · Updated 8 months ago
- The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation" ☆20 · Updated 6 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆86 · Updated 7 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆87 · Updated 9 months ago
- Official code repo for the paper "Merging Models on the Fly Without Retraining: A Sequential Approach to Scalable Continual Model Merging" ☆20 · Updated last month
- ☆18 · Updated last year
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆188 · Updated 4 months ago
- ☆18 · Updated 6 months ago
- ☆51 · Updated last year
- [ICLR 2025] "Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond" ☆13 · Updated 8 months ago
- ☆63 · Updated 8 months ago
- ☆23 · Updated 11 months ago
- ThinK: Thinner Key Cache by Query-Driven Pruning ☆24 · Updated 9 months ago
- The official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆47 · Updated 11 months ago
- Code for the paper "Aligning Large Language Models with Representation Editing: A Control Perspective" ☆34 · Updated 9 months ago
- ☆11 · Updated last year