tengxiaoliu / LM_skip
[NeurIPS 2024] Can Language Models Learn to Skip Steps?
☆22 · Updated last year
Alternatives and similar repositories for LM_skip
Users interested in LM_skip are comparing it to the repositories listed below.
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆83 · Updated last year
- Model merging is a highly efficient approach for long-to-short reasoning. ☆98 · Updated 3 months ago
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models ☆58 · Updated last year
- 🍼 Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts ☆41 · Updated last year
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆55 · Updated last year
- ☆71 · Updated last year
- Code for Research Project TLDR ☆25 · Updated 5 months ago
- The implementation of paper "LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Fee… ☆37 · Updated last year
- [ACL 2025] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLM… ☆68 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆76 · Updated 3 months ago
- [ICLR 2025] Language Imbalance Driven Rewarding for Multilingual Self-improving ☆24 · Updated 5 months ago
- The official repository of the Omni-MATH benchmark. ☆93 · Updated last year
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆119 · Updated 7 months ago
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆99 · Updated 11 months ago
- ☆78 · Updated last year
- ☆73 · Updated 9 months ago
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆137 · Updated last year
- Code and data for "ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM" (NeurIPS 2024 Track Datasets and… ☆63 · Updated 8 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated last year
- Code and data for "Living in the Moment: Can Large Language Models Grasp Co-Temporal Reasoning?" (ACL 2024) ☆32 · Updated last year
- ☆56 · Updated 3 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆64 · Updated last year
- ☆17 · Updated last year
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆62 · Updated 8 months ago
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆85 · Updated last year
- ☆58 · Updated last year
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆183 · Updated 8 months ago
- Official code for paper "SPA-RL: Reinforcing LLM Agent via Stepwise Progress Attribution" ☆62 · Updated 4 months ago
- [ACL' 25] The official code repository for PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models. ☆87 · Updated 11 months ago
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… ☆53 · Updated last year