tengxiaoliu / LM_skip
[NeurIPS 2024] Can Language Models Learn to Skip Steps?
☆20 · Updated 10 months ago
Alternatives and similar repositories for LM_skip
Users interested in LM_skip are comparing it to the repositories listed below.
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆82 · Updated 10 months ago
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models ☆58 · Updated last year
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆117 · Updated 5 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆92 · Updated last month
- Code and data for "ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM" (NeurIPS 2024 Track Datasets and… ☆60 · Updated 6 months ago
- The implementation of paper "LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Fee… ☆37 · Updated last year
- [ACL 2025] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLM… ☆68 · Updated last year
- 🍼 Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts ☆41 · Updated last year
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆130 · Updated last year
- ☆68 · Updated 7 months ago
- The official repository of the Omni-MATH benchmark. ☆88 · Updated 11 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆76 · Updated last month
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆56 · Updated last year
- ☆76 · Updated last year
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆179 · Updated 6 months ago
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆97 · Updated 9 months ago
- [ICLR 2025] Language Imbalance Driven Rewarding for Multilingual Self-improving ☆24 · Updated 3 months ago
- ☆68 · Updated last year
- ☆57 · Updated last year
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆84 · Updated last year
- CFBench: A Comprehensive Constraints-Following Benchmark for LLMs ☆44 · Updated last year
- ☆17 · Updated last year
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated last year
- A survey of long-context LLMs from four perspectives: architecture, infrastructure, training, and evaluation ☆60 · Updated 8 months ago
- The official GitHub repository of the paper "Recent advances in large language model benchmarks against data contamination: From static t… ☆47 · Updated 2 months ago
- Collection of papers for scalable automated alignment. ☆94 · Updated last year
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆60 · Updated 6 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆63 · Updated last year
- [ACL' 25] The official code repository for PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models. ☆84 · Updated 9 months ago
- ☆46 · Updated 8 months ago