MraDonkey / rethinking_prompting
[ACL 2025 Main] (🏆 Outstanding Paper Award) Rethinking the Role of Prompting Strategies in LLM Test-Time Scaling: A Perspective of Probability Theory
☆14 · Updated 4 months ago
Alternatives and similar repositories for rethinking_prompting
Users interested in rethinking_prompting are comparing it to the libraries listed below
- ☆51 · Updated last year
- ☆38 · Updated 4 months ago
- Resources and paper list for 'Scaling Environments for Agents'. This repository accompanies our survey on how environments contribute to … (☆43 · Updated last week)
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" (☆138 · Updated last month)
- RM-R1: Unleashing the Reasoning Potential of Reward Models (☆154 · Updated 5 months ago)
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. (☆133 · Updated 8 months ago)
- Source code for our paper: "ARIA: Training Language Agents with Intention-Driven Reward Aggregation". (☆25 · Updated 4 months ago)
- A comprehensive collection of learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… (☆59 · Updated 6 months ago)
- [EMNLP 2025] WebAgent-R1: Training Web Agents via End-to-End Multi-Turn Reinforcement Learning (☆63 · Updated last month)
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style (☆72 · Updated 4 months ago)
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA. (☆80 · Updated last month)
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) (☆64 · Updated last year)
- This is the official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" (☆72 · Updated 7 months ago)
- [2025-TMLR] A Survey on the Honesty of Large Language Models (☆63 · Updated last year)
- Official code for paper "SPA-RL: Reinforcing LLM Agent via Stepwise Progress Attribution" (☆56 · Updated 3 months ago)
- ☆69 · Updated 5 months ago
- A Sober Look at Language Model Reasoning (☆89 · Updated 3 weeks ago)
- MathFusion: Enhancing Mathematical Problem-solving of LLM through Instruction Fusion (ACL 2025) (☆35 · Updated 4 months ago)
- [arXiv: 2505.02156] Adaptive Thinking via Mode Policy Optimization for Social Language Agents (☆46 · Updated 5 months ago)
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025. (☆27 · Updated 9 months ago)
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning (☆69 · Updated 5 months ago)
- [ACL'24] Chain of Thought (CoT) is significant in improving the reasoning abilities of large language models (LLMs). However, the correla… (☆46 · Updated 7 months ago)
- ☆25 · Updated 8 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) (☆63 · Updated last year)
- A curated list of awesome LLM Inference-Time Self-Improvement (ITSI, pronounced "itsy") papers from our recent survey: A Survey on Large … (☆97 · Updated 11 months ago)
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment (☆16 · Updated 11 months ago)
- Official Implementation for the paper "Integrative Decoding: Improving Factuality via Implicit Self-consistency" (☆32 · Updated 8 months ago)
- Code for paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models." (☆49 · Updated last year)
- [AAAI 2026] Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning". (☆92 · Updated last month)
- This is the implementation of LeCo (☆31 · Updated 10 months ago)