Trustworthy-ML-Lab / ThinkEdit
An effective weight-editing method for mitigating overly short reasoning in LLMs, and a mechanistic study uncovering how reasoning length is encoded in the model’s representation space.
☆12 · Updated last week
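The one-line description above is the only method detail on this page, so here is a minimal sketch of the general idea behind this kind of directional weight editing: removing a "short reasoning" direction from a layer's output weights so the layer can no longer write along that direction. The direction `short_dir`, the edited module (`o_proj`), and the model name are illustrative assumptions, not taken from the ThinkEdit repository.

```python
# Illustrative sketch only; not ThinkEdit's actual implementation.
# Assumes `short_dir` is a precomputed unit vector in the residual stream
# associated with overly short reasoning.
import torch

def remove_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of a linear layer's output along `direction`.

    weight:    nn.Linear weight of shape (d_out, d_in), where d_out is the
               residual-stream dimension the layer writes into.
    direction: (d_out,) vector to project out of everything the layer writes.
    """
    d = direction / direction.norm()
    # (I - d d^T) W removes the component of every output along d.
    return weight - torch.outer(d, d @ weight)

# Usage sketch with a Hugging Face model (module path and checkpoint are illustrative):
# model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")
# attn = model.model.layers[layer_idx].self_attn
# attn.o_proj.weight.data = remove_direction(attn.o_proj.weight.data, short_dir)
```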
Alternatives and similar repositories for ThinkEdit
Users interested in ThinkEdit are comparing it to the repositories listed below.
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆30 · Updated 3 months ago
- A Sober Look at Language Model Reasoning ☆75 · Updated last month
- Official repository for the paper "O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning" ☆85 · Updated 4 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆164 · Updated 2 weeks ago
- ☆15 · Updated 10 months ago
- PoC for "SpecReason: Fast and Accurate Inference-Time Compute via Speculative Reasoning" [arXiv '25] ☆41 · Updated 2 months ago
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆68 · Updated 6 months ago
- Chain of Thoughts (CoT) is so hot! so long! We need short reasoning process! ☆54 · Updated 3 months ago
- ☆65 · Updated 3 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆74 · Updated 3 weeks ago
- Implementation of CoLA: Compute-Efficient Pre-Training of LLMs via Low-Rank Activation ☆23 · Updated 4 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆45 · Updated 8 months ago
- Official implementation for "Mixture of In-Context Experts Enhance LLMs' Awareness of Long Contexts" (accepted at NeurIPS 2024) ☆12 · Updated 6 months ago
- ☆13 · Updated last year
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆52 · Updated 2 weeks ago
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low Rank Decomposition ☆13 · Updated 3 months ago
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆127 · Updated 3 weeks ago
- ☆18 · Updated 7 months ago
- Official code for Guiding Language Model Math Reasoning with Planning Tokens ☆14 · Updated last year
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆59 · Updated 4 months ago
- [ICLR 2025 Workshop] "Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models" ☆30 · Updated 2 weeks ago
- Test-time training on nearest neighbors for large language models ☆44 · Updated last year
- ☆49 · Updated last year
- ☆10 · Updated last year
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆76 · Updated 5 months ago
- [ICLR 2025🔥] D2O: Dynamic Discriminative Operations for Efficient Long-Context Inference of Large Language Models ☆18 · Updated last week
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆35 · Updated 6 months ago
- ☆36 · Updated last week
- Model merging is a highly efficient approach for long-to-short reasoning. ☆71 · Updated last month
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆105 · Updated last week