MraDonkey / rethinking_prompting
[ACL 2025 Main] (🏆 Outstanding Paper Award) Rethinking the Role of Prompting Strategies in LLM Test-Time Scaling: A Perspective of Probability Theory
☆15 · Updated 5 months ago
Alternatives and similar repositories for rethinking_prompting
Users that are interested in rethinking_prompting are comparing it to the libraries listed below
- ☆51 · Updated last year
- Resources and paper list for 'Scaling Environments for Agents'. This repository accompanies our survey on how environments contribute to … · ☆57 · Updated this week
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" · ☆155 · Updated 3 months ago
- ☆43 · Updated 5 months ago
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. · ☆134 · Updated 10 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) · ☆65 · Updated last year
- [2025-TMLR] A Survey on the Honesty of Large Language Models · ☆64 · Updated last year
- A Sober Look at Language Model Reasoning · ☆92 · Updated 2 months ago
- A comprehensive collection of learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… · ☆60 · Updated 7 months ago
- Source code for our paper: "ARIA: Training Language Agents with Intention-Driven Reward Aggregation". · ☆25 · Updated 5 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" · ☆39 · Updated 2 years ago
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA. · ☆83 · Updated 2 months ago
- Official code for paper "SPA-RL: Reinforcing LLM Agent via Stepwise Progress Attribution" · ☆62 · Updated 4 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning · ☆71 · Updated 6 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style · ☆73 · Updated 6 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models · ☆156 · Updated 7 months ago
- Code for paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models." · ☆51 · Updated last year
- ☆25 · Updated 9 months ago
- [EMNLP 2025] WebAgent-R1: Training Web Agents via End-to-End Multi-Turn Reinforcement Learning · ☆69 · Updated 2 months ago
- ☆54 · Updated 10 months ago
- Reproducing R1 for Code with Reliable Rewards · ☆12 · Updated 9 months ago
- A curated list of awesome LLM Inference-Time Self-Improvement (ITSI, pronounced "itsy") papers from our recent survey: A Survey on Large … · ☆99 · Updated last year
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment · ☆16 · Updated last year
- ☆70 · Updated 7 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) · ☆62 · Updated last year
- [ACL'24] Chain of Thought (CoT) is significant in improving the reasoning abilities of large language models (LLMs). However, the correla… · ☆47 · Updated 8 months ago
- Discriminative Constrained Optimization for Reinforcing Large Reasoning Models · ☆50 · Updated 2 months ago
- This is the implementation of LeCo · ☆31 · Updated last year
- ☆23 · Updated 3 months ago
- [arXiv:2505.02156] Adaptive Thinking via Mode Policy Optimization for Social Language Agents · ☆47 · Updated 6 months ago