eth-lre / mathtutorbench
Benchmark for Measuring Open-ended Pedagogical Capabilities of LLM Tutors, EMNLP 2025 Oral
☆28 · Updated last month
Alternatives and similar repositories for mathtutorbench
Users who are interested in mathtutorbench are comparing it to the libraries listed below:
- Multi-turn RL framework for aligning models to be tutors instead of answerers. EMNLP 2025 Oral ☆27 · Updated 3 weeks ago
- This repository hosts the paper “LLM Based Math Tutoring: Challenges and Dataset”, along with the accompanying dataset. It explores the p… ☆54 · Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆540 · Updated last year
- 🧮 MathDial: A Dialog Tutoring Dataset with Rich Pedagogical Properties Grounded in Math Reasoning Problems, EMNLP Findings 2023 ☆72 · Updated 3 months ago
- RewardBench: the first evaluation tool for reward models. ☆674 · Updated 6 months ago
- A collection of research papers for Self-Correcting Large Language Models with Automated Feedback. ☆562 · Updated last year
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆533 · Updated 11 months ago
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ☆217 · Updated 2 years ago
- Prod Env ☆436 · Updated 2 years ago
- This repository contains a collection of papers and resources on Reasoning in Large Language Models. ☆565 · Updated 2 years ago
- Codes for papers on Large Language Models Personalization (LaMP) ☆180 · Updated 10 months ago
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively. ☆767 · Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆551 · Updated last year
- A new tool learning benchmark aiming at well-balanced stability and reality, based on ToolBench. ☆204 · Updated 8 months ago
- ☆281 · Updated 11 months ago
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆563 · Updated 11 months ago
- [ACL 2023] Reasoning with Language Model Prompting: A Survey ☆992 · Updated 7 months ago
- ☆340 · Updated 6 months ago
- Data and Code for Program of Thoughts [TMLR 2023] ☆302 · Updated last year
- An Analytical Evaluation Board of Multi-turn LLM Agents [NeurIPS 2024 Oral] ☆380 · Updated last year
- The papers are organized according to our survey: Evaluating Large Language Models: A Comprehensive Survey. ☆791 · Updated last year
- ICML 2024: Improving Factuality and Reasoning in Language Models through Multiagent Debate ☆500 · Updated 8 months ago
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 ☆505 · Updated last year
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆540 · Updated last year
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆511 · Updated last year
- Generative Judge for Evaluating Alignment ☆248 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆898 · Updated 3 months ago
- Data and code for FreshLLMs (https://arxiv.org/abs/2310.03214) ☆381 · Updated last month
- This is the repository that contains the source code for the Self-Evaluation Guided MCTS for online DPO. ☆327 · Updated last year
- A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval. ☆385 · Updated 2 years ago