raybears / cot-transparency
Improving transparency of large language models' reasoning
☆14 · Updated 2 months ago
Alternatives and similar repositories for cot-transparency
Users interested in cot-transparency are comparing it to the libraries listed below.
- ☆23 · Updated last year
- Tree prompting: easy-to-use scikit-learn interface for improved prompting. ☆41 · Updated 2 years ago
- The official implementation of Self-Exploring Language Models (SELM) ☆63 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- Resa: Transparent Reasoning Models via SAEs ☆47 · Updated 4 months ago
- ☆33 · Updated last year
- Exploration of automated dataset selection approaches at large scales. ☆52 · Updated 11 months ago
- ☆33 · Updated 7 months ago
- ☆99 · Updated last year
- ☆123 · Updated 11 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- Challenge LLMs to Reason About Reasoning: A Benchmark to Unveil Cognitive Depth in LLMs ☆51 · Updated last year
- ☆19 · Updated 6 months ago
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv:2401.01335) ☆29 · Updated last year
- Reinforcing General Reasoning without Verifiers ☆96 · Updated 7 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆126 · Updated last year
- ☆130 · Updated last year
- ☆74 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated 2 years ago
- ☆34 · Updated 11 months ago
- ☆23 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated last year
- Official repository for Montessori-Instruct: Generate Influential Training Data Tailored for Student Learning [ICLR 2025] ☆50 · Updated last year
- Official implementation for "Law of the Weakest Link: Cross Capabilities of Large Language Models" ☆43 · Updated last year
- Sotopia-RL: Reward Design for Social Intelligence ☆46 · Updated last week
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆120 · Updated last week
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆72 · Updated 11 months ago
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆102 · Updated 2 years ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆47 · Updated 9 months ago
- Replicating O1 inference-time scaling laws ☆93 · Updated last year