MraDonkey / rethinking_prompting
[ACL 2025 Main] (🏆 Outstanding Paper Award) Rethinking the Role of Prompting Strategies in LLM Test-Time Scaling: A Perspective of Probability Theory
⭐14 · Updated 4 months ago
Alternatives and similar repositories for rethinking_prompting
Users interested in rethinking_prompting are comparing it to the repositories listed below.
- Official code for the paper "SPA-RL: Reinforcing LLM Agent via Stepwise Progress Attribution" (⭐61, updated 3 months ago)
- Resources and paper list for 'Scaling Environments for Agents'. This repository accompanies our survey on how environments contribute to … (⭐53, updated 2 weeks ago)
- ⭐51, updated last year
- ⭐41, updated 4 months ago
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" (⭐146, updated 2 months ago)
- ⭐16, updated last year
- A Sober Look at Language Model Reasoning (⭐92, updated last month)
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning (⭐70, updated 5 months ago)
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment (⭐16, updated last year)
- RM-R1: Unleashing the Reasoning Potential of Reward Models (⭐156, updated 6 months ago)
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style (⭐73, updated 5 months ago)
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA. (⭐82, updated 2 months ago)
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" (⭐134, updated 9 months ago)
- Source code for our paper "ARIA: Training Language Agents with Intention-Driven Reward Aggregation" (⭐25, updated 5 months ago)
- A comprehensive collection of work on learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… (⭐60, updated 6 months ago)
- [arXiv:2505.02156] Adaptive Thinking via Mode Policy Optimization for Social Language Agents (⭐46, updated 6 months ago)
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) (⭐64, updated last year)
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression (⭐127, updated 8 months ago)
- Reproducing R1 for Code with Reliable Rewards (⭐12, updated 9 months ago)
- The official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language …" (⭐161, updated 7 months ago)
- MathFusion: Enhancing Mathematical Problem-solving of LLM through Instruction Fusion (ACL 2025) (⭐35, updated 5 months ago)
- ⭐24, updated 9 months ago
- [ACL'24] Chain of Thought (CoT) is significant in improving the reasoning abilities of large language models (LLMs). However, the correla… (⭐47, updated 7 months ago)
- ⭐19, updated 3 months ago
- The official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" (⭐72, updated 8 months ago)
- Official implementation for the paper "Integrative Decoding: Improving Factuality via Implicit Self-consistency" (⭐32, updated 8 months ago)
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping (⭐62, updated 7 months ago)
- [2025 TMLR] A Survey on the Honesty of Large Language Models (⭐64, updated last year)
- ⭐53, updated 10 months ago
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025 (⭐28, updated 10 months ago)