guyuntian / CoT_benchmark
Code for "Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective"
☆20, updated 2 years ago
Alternatives and similar repositories for CoT_benchmark
Users interested in CoT_benchmark are comparing it to the repositories listed below.
- A Sober Look at Language Model Reasoning (☆87, updated last month)
- Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples (☆44, updated 3 months ago)
- Code for "Reasoning to Learn from Latent Thoughts" (☆122, updated 7 months ago)
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision (☆125, updated last year)
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" (☆61, updated last year)
- GenRM-CoT: Data release for verification rationales (☆67, updated last year)
- Code for the paper "SPG: Sandwiched Policy Gradient for Masked Diffusion Language Models" (☆27, updated last week)
- ICML 2024 - Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment (☆57, updated last year)
- A curated list of awesome resources dedicated to Scaling Laws for LLMs (☆79, updated 2 years ago)
- ☆98, updated 2 years ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" (☆37, updated 3 months ago)
- ☆51, updated last year
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization (☆47, updated 3 months ago)
- ☆19, updated 6 months ago
- ☆179, updated 5 months ago
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) (☆57, updated last year)
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" (☆119, updated last week)
- ☆46, updated last year
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free (☆44, updated 7 months ago)
- ☆41, updated 2 years ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization (☆91, updated last year)
- Extending context length of visual language models (☆12, updated 10 months ago)
- ☆29, updated last year
- ☆76, updated 11 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) (☆132, updated 4 months ago)
- Official repository for "Towards Uncertainty-Aware Language Agent" (☆29, updated last year)
- Directional Preference Alignment (☆57, updated last year)
- The official implementation of "ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization…" (☆16, updated last year)
- Test-time training on nearest neighbors for large language models (☆46, updated last year)
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] (☆51, updated last year)