AlphaPav / mem-kk-logic
On Memorization of Large Language Models in Logical Reasoning
☆71 · Updated 5 months ago
Alternatives and similar repositories for mem-kk-logic
Users interested in mem-kk-logic are comparing it to the libraries listed below.
- [ACL-25] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆64 · Updated 10 months ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆113 · Updated 9 months ago
- GenRM-CoT: Data release for verification rationales ☆65 · Updated 10 months ago
- A unified suite for generating elite reasoning problems and training high-performance LLMs, including pioneering attention-free architect… ☆65 · Updated 3 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆128 · Updated 4 months ago
- The official repository of the Omni-MATH benchmark. ☆87 · Updated 8 months ago
- [ACL 2024 Findings] MathBench: A Comprehensive Multi-Level Difficulty Mathematics Evaluation Dataset ☆106 · Updated 3 months ago
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆170 · Updated 3 months ago
- ☆103 · Updated 9 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆80 · Updated 3 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆104 · Updated last month
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆137 · Updated last year
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆184 · Updated 7 months ago
- ☆209 · Updated 6 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆110 · Updated 4 months ago
- Code implementation of synthetic continued pretraining ☆127 · Updated 8 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆81 · Updated 7 months ago
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆61 · Updated 9 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆113 · Updated 7 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆147 · Updated 6 months ago
- ☆105 · Updated last month
- [NeurIPS 2024] The official implementation of the paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆129 · Updated 5 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆245 · Updated 4 months ago
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… ☆51 · Updated last year
- ☆33 · Updated 11 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆185 · Updated last year
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆60 · Updated last month
- ☆205 · Updated 5 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆60 · Updated 10 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆149 · Updated 10 months ago