sail-sg / ActivePRM
☆20 · Updated 9 months ago
Alternatives and similar repositories for ActivePRM
Users who are interested in ActivePRM are comparing it to the repositories listed below.
- From Accuracy to Robustness: A Study of Rule- and Model-based Verifiers in Mathematical Reasoning ☆24 · Updated 4 months ago
- The rule-based evaluation subset and code implementation of Omni-MATH ☆26 · Updated last year
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization ☆51 · Updated 6 months ago
- Extending context length of visual language models ☆12 · Updated last year
- A Sober Look at Language Model Reasoning ☆92 · Updated 2 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆77 · Updated 4 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆70 · Updated 6 months ago
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆39 · Updated last year
- Code for "Reasoning to Learn from Latent Thoughts" ☆124 · Updated 10 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆47 · Updated 9 months ago
- Code for ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆109 · Updated 3 months ago
- [ICLR 2026] Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆62 · Updated 8 months ago
- ☆46 · Updated 4 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated last year
- ☆58 · Updated last year
- Code for "Language Models Can Learn from Verbal Feedback Without Scalar Rewards" ☆59 · Updated last month
- Official code for Guiding Language Model Math Reasoning with Planning Tokens ☆18 · Updated last year
- Instruction-following benchmark for large reasoning models ☆44 · Updated 6 months ago
- ☆13 · Updated last year
- ☆51 · Updated last year
- [COLM 2025] SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆51 · Updated 10 months ago
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated last year
- Reproducing R1 for Code with Reliable Rewards ☆12 · Updated 10 months ago
- ArcherCodeR is an open-source initiative enhancing code reasoning in large language models through scalable, rule-governed reinforcement … ☆44 · Updated 6 months ago
- The official repository of the Omni-MATH benchmark ☆93 · Updated last year
- The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation" ☆22 · Updated 9 months ago
- ☆23 · Updated 3 months ago
- The official repository of 'Unnatural Language Are Not Bugs but Features for LLMs' ☆24 · Updated 8 months ago
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆50 · Updated last year
- GenRM-CoT: Data release for verification rationales ☆67 · Updated last year