eth-lre / PedagogicalRL
Multi-turn RL framework for aligning models to be tutors instead of answerers. EMNLP 2025 Oral
☆27 · Updated 3 weeks ago
Alternatives and similar repositories for PedagogicalRL
Users interested in PedagogicalRL are comparing it to the repositories listed below:
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆271 · Updated last year
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆533 · Updated 11 months ago
- ☆52 · Updated 10 months ago
- RewardBench: the first evaluation tool for reward models. ☆674 · Updated 6 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆181 · Updated 7 months ago
- ☆89 · Updated last year
- ☆340 · Updated 7 months ago
- ☆219 · Updated 9 months ago
- ☆71 · Updated 8 months ago
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ☆217 · Updated 2 years ago
- Source code for Self-Evaluation Guided MCTS for online DPO. ☆328 · Updated last year
- Reproduction of Google's SCoRe (Training Language Models to Self-Correct via Reinforcement Learning). ☆142 · Updated last year
- Reproducible, flexible LLM evaluations ☆312 · Updated last month
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆372 · Updated 2 years ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆453 · Updated last year
- Performant framework for training, analyzing and visualizing Sparse Autoencoders (SAEs) and their frontier variants. ☆168 · Updated this week
- A curated list of LLM interpretability material: tutorials, libraries, surveys, papers, blogs, etc. ☆290 · Updated 2 weeks ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆257 · Updated 7 months ago
- Critique-out-Loud Reward Models ☆70 · Updated last year
- ☆12 · Updated last year
- A Survey on Data Selection for Language Models ☆254 · Updated 8 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆83 · Updated 11 months ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆686 · Updated 11 months ago
- ☆1,052 · Updated 6 months ago
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆142 · Updated 4 months ago
- ☆329 · Updated 7 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆148 · Updated last year
- [NeurIPS 2024] How do Large Language Models Handle Multilingualism? ☆48 · Updated last year
- ☆70 · Updated last year
- ☆274 · Updated 2 years ago