ZubinGou / math-evaluation-harness
A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨
⭐196 · Updated 11 months ago
Alternatives and similar repositories for math-evaluation-harness:
Users interested in math-evaluation-harness are comparing it to the repositories listed below.
- ⭐182 · Updated last month
- ⭐325 · Updated 2 months ago
- ⭐148 · Updated 3 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… · ⭐123 · Updated 9 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" · ⭐174 · Updated last month
- Repo of the paper "Free Process Rewards without Process Labels" · ⭐140 · Updated last month
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" · ⭐255 · Updated 7 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. · ⭐218 · Updated last week
- Source code for Self-Evaluation Guided MCTS for online DPO. · ⭐301 · Updated 8 months ago
- ⭐272 · Updated 3 weeks ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* · ⭐101 · Updated 4 months ago
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" · ⭐181 · Updated 8 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning · ⭐175 · Updated 3 weeks ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs · ⭐114 · Updated last month
- Reference implementation for Token-level Direct Preference Optimization (TDPO) · ⭐133 · Updated 2 months ago
- ⭐265 · Updated 8 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" · ⭐89 · Updated last month
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 · ⭐316 · Updated 6 months ago
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art, novel, exciting long2short methods on large reasoning models. It contains… · ⭐182 · Updated last week
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning · ⭐427 · Updated 5 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning · ⭐147 · Updated 7 months ago
- A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, and Beyond · ⭐152 · Updated this week
- ⭐65 · Updated last year
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings · ⭐153 · Updated 10 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs · ⭐249 · Updated 3 months ago
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" · ⭐358 · Updated 2 months ago
- Official repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" · ⭐234 · Updated last month
- ⭐49 · Updated last month
- A Survey on Efficient Reasoning for LLMs · ⭐281 · Updated last week
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" · ⭐74 · Updated 3 months ago