Chengsong-Huang / Self-Calibration
Code for "Efficient Test-Time Scaling via Self-Calibration"
☆14 · Updated 3 months ago
Alternatives and similar repositories for Self-Calibration
Users interested in Self-Calibration are comparing it to the repositories listed below.
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025.☆22Updated 4 months ago
- ☆15Updated 6 months ago
- ACL 2025: SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs; preprint: SoftCoT++: Test-Time Scaling with Soft Chain-of… ☆28 · Updated 3 weeks ago
- [arXiv: 2505.02156] Adaptive Thinking via Mode Policy Optimization for Social Language Agents ☆33 · Updated last month
- ☆109 · Updated 3 months ago
- ☆24 · Updated 2 months ago
- Source code for our paper: "ARIA: Training Language Agents with Intention-Driven Reward Aggregation". ☆18 · Updated last week
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆54 · Updated 6 months ago
- ☆64 · Updated last month
- ☆18 · Updated last month
- ☆46 · Updated 8 months ago
- A Survey on the Honesty of Large Language Models ☆57 · Updated 6 months ago
- A Sober Look at Language Model Reasoning ☆74 · Updated last week
- [NeurIPS 2024] A Novel Rank-Based Metric for Evaluating Large Language Models ☆46 · Updated 3 weeks ago
- Can Atomic Step Decomposition Enhance the Self-structured Reasoning of Multimodal Large Models? ☆24 · Updated 3 months ago
- [ICLR 2025 Workshop] "Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models" ☆25 · Updated last week
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization ☆38 · Updated last month
- Model merging is a highly efficient approach for long-to-short reasoning. ☆65 · Updated 3 weeks ago
- This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language … ☆69 · Updated last month
- ☆19 · Updated last month
- A unified platform for implementing and evaluating test-time reasoning mechanisms in Large Language Models (LLMs). ☆19 · Updated 5 months ago
- [ACL'25] Mosaic-IT: Cost-Free Compositional Data Synthesis for Instruction Tuning ☆19 · Updated this week
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆73 · Updated 4 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆28 · Updated 2 months ago
- PyTorch implementation of Tree Preference Optimization (TPO) (accepted at ICLR'25) ☆19 · Updated 2 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆88 · Updated 8 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆73 · Updated last week
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆41 · Updated 11 months ago
- ☆22 · Updated 11 months ago
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language … ☆35 · Updated 5 months ago