sail-sg / feedback-conditional-policy
Code for "Language Models Can Learn from Verbal Feedback Without Scalar Rewards"
☆55 · Updated 2 months ago
Alternatives and similar repositories for feedback-conditional-policy
Users interested in feedback-conditional-policy are comparing it to the libraries listed below.
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization ☆50 · Updated 5 months ago
- ☆45 · Updated 2 months ago
- ☆53 · Updated 2 months ago
- ☆68 · Updated 6 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆70 · Updated 5 months ago
- Reinforcing General Reasoning without Verifiers ☆92 · Updated 6 months ago
- A Sober Look at Language Model Reasoning ☆92 · Updated last month
- ☆17 · Updated 4 months ago
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated last year
- Code for "Variational Reasoning for Language Models" ☆53 · Updated 2 months ago
- ☆51 · Updated 10 months ago
- Emergent Hierarchical Reasoning in LLMs/VLMs through Reinforcement Learning ☆53 · Updated 2 months ago
- The official repository of the NeurIPS'25 paper "Ada-R1: From Long-CoT to Hybrid-CoT via Bi-Level Adaptive Reasoning Optimization" ☆20 · Updated last month
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ☆126 · Updated 8 months ago
- A Recipe for Building LLM Reasoners to Solve Complex Instructions ☆29 · Updated 2 months ago
- Code for Heima ☆58 · Updated 8 months ago
- Code for Evolving Language Models without Labels: Majority Drives Selection, Novelty Promotes Variation (EVOL-RL) ☆41 · Updated 2 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆49 · Updated 6 months ago
- ☆19 · Updated 8 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆86 · Updated 9 months ago
- ☆51 · Updated last year
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆86 · Updated 7 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆148 · Updated 5 months ago
- The official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆72 · Updated 8 months ago
- SIFT: Grounding LLM Reasoning in Contexts via Stickers ☆57 · Updated 9 months ago
- ☆23 · Updated last year
- Source code for the paper "ARIA: Training Language Agents with Intention-Driven Reward Aggregation" ☆25 · Updated 4 months ago
- ☆21 · Updated last year
- Code for "Reasoning to Learn from Latent Thoughts" ☆123 · Updated 8 months ago
- Official code for Guiding Language Model Math Reasoning with Planning Tokens ☆18 · Updated last year