eth-lre / PedagogicalRL
Multi-turn RL framework for aligning models to be tutors instead of answerers (EMNLP 2025 Oral).
☆26 · Updated 3 weeks ago
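As a rough, hypothetical sketch of the idea in the description above (reward a tutor policy when a simulated student reaches the solution without the tutor simply stating the final answer), here is a minimal multi-turn rollout in Python. Every name here (`tutor_policy`, `student_model`, `solves`, `leaks_final_answer`) is an illustrative placeholder, not the repository's actual API.

```python
# Minimal, hypothetical sketch of a multi-turn tutoring reward loop.
# None of these names come from the PedagogicalRL codebase; they are
# placeholders to illustrate the "tutor instead of answerer" objective.

def solves(message: str, answer: str) -> bool:
    """Toy check: does the student's message contain the reference answer?"""
    return answer in message

def leaks_final_answer(message: str, answer: str) -> bool:
    """Toy check: does the tutor's message state the reference answer verbatim?"""
    return answer in message

def rollout_episode(problem, answer, tutor_policy, student_model, max_turns=6):
    """Run one tutor-student dialogue and return (dialogue, scalar reward)."""
    dialogue = [{"role": "user", "content": problem}]
    leaked = False
    for _ in range(max_turns):
        tutor_msg = tutor_policy.respond(dialogue)     # tutor offers a hint or question
        leaked = leaked or leaks_final_answer(tutor_msg, answer)
        dialogue.append({"role": "tutor", "content": tutor_msg})

        student_msg = student_model.respond(dialogue)  # student makes an attempt
        dialogue.append({"role": "student", "content": student_msg})
        if solves(student_msg, answer):
            # Reward guided success; give nothing if the tutor just revealed the answer.
            return dialogue, (0.0 if leaked else 1.0)
    return dialogue, 0.0  # student never solved the problem
```

The episode-level reward could then drive a standard multi-turn policy-gradient update over the tutor's messages; the actual reward shaping and student simulation in the framework may differ from this sketch.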
Alternatives and similar repositories for PedagogicalRL
Users who are interested in PedagogicalRL are comparing it to the libraries listed below.
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆270 · Updated last year
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆179 · Updated 6 months ago
- ☆12 · Updated last year
- ☆52 · Updated 9 months ago
- ☆70 · Updated 8 months ago
- LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation ☆33 · Updated 2 months ago
- ☆69 · Updated last year
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆82 · Updated 11 months ago
- Paper reproduction of Google SCoRe (Training Language Models to Self-Correct via Reinforcement Learning) ☆142 · Updated last year
- ☆89 · Updated 11 months ago
- RewardBench: the first evaluation tool for reward models. ☆667 · Updated 6 months ago
- This is the repository that contains the source code for the Self-Evaluation Guided MCTS for online DPO. ☆326 · Updated last year
- ☆341 · Updated 6 months ago
- [NeurIPS 2024] How do Large Language Models Handle Multilingualism? ☆46 · Updated last year
- ☆217 · Updated 8 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆76 · Updated 2 months ago
- Critique-out-Loud Reward Models ☆70 · Updated last year
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆56 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆125 · Updated last year
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆130 · Updated last year
- Performant framework for training, analyzing and visualizing Sparse Autoencoders (SAEs) and their frontier variants. ☆168 · Updated this week
- [AAAI 2025] Assessing the Creativity of LLMs in Proposing Novel Solutions to Mathematical Problems ☆12 · Updated 7 months ago
- Official code for the ACL 2024 paper: Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New … ☆57 · Updated last year
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆133 · Updated 8 months ago
- [ICLR 2025] BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval ☆179 · Updated 3 months ago
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ☆218 · Updated 2 years ago
- Code for Research Project TLDR ☆24 · Updated 4 months ago
- Evaluation utilities based on SymPy. ☆21 · Updated last year
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆135 · Updated 4 months ago
- Reproducible, flexible LLM evaluations ☆301 · Updated 3 weeks ago