guyuntian / CoT_benchmark
Code for "Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective"
☆21 · Updated 2 years ago
Alternatives and similar repositories for CoT_benchmark
Users interested in CoT_benchmark are comparing it to the repositories listed below:
- Discriminative Constrained Optimization for Reinforcing Large Reasoning Models ☆49 · Updated 2 months ago
- A Sober Look at Language Model Reasoning ☆92 · Updated last month
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated 2 years ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆124 · Updated last year
- Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples ☆44 · Updated 5 months ago
- ☆51 · Updated 2 years ago
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆150 · Updated 2 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆124 · Updated 9 months ago
- ☆46 · Updated last year
- ☆41 · Updated 2 years ago
- [ICML 2024] Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- GenRM-CoT: Data release for verification rationales ☆68 · Updated last year
- Code for "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆17 · Updated last year
- Official implementation of the Reward rAnked Fine-Tuning algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆38 · Updated last year
- Co-Supervised Learning: Improving Weak-to-Strong Generalization with Hierarchical Mixture of Experts ☆16 · Updated last year
- ☆19 · Updated 8 months ago
- [ICLR 2025] Official implementation of the paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆70 · Updated 9 months ago
- ☆103 · Updated 2 years ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆94 · Updated last year
- Preparing for ML Interviews ☆52 · Updated last month
- ☆102 · Updated 2 years ago
- ☆29 · Updated last year
- Directional Preference Alignment ☆58 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆151 · Updated 6 months ago
- ☆52 · Updated 9 months ago
- Test-time training on nearest neighbors for large language models ☆49 · Updated last year
- Code for "Variational Reasoning for Language Models" ☆54 · Updated 3 months ago
- Repository for the paper "Free Process Rewards without Process Labels" ☆168 · Updated 9 months ago
- Extending context length of visual language models ☆12 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year