daje0601 / Google_SCoRe
Paper reproduction of Google SCoRe (Training Language Models to Self-Correct via Reinforcement Learning)
☆142 · Updated last year
Alternatives and similar repositories for Google_SCoRe
Users interested in Google_SCoRe are comparing it to the repositories listed below.
- Official implementation of "OffsetBias: Leveraging Debiased Data for Tuning Evaluators" ☆25 · Updated last year
- ☆29 · Updated 7 months ago
- [NeurIPS 2025] Reasoning Models Better Express Their Confidence ☆22 · Updated last month
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasonin… ☆51 · Updated last year
- [ACL 2024] LangBridge: Multilingual Reasoning Without Multilingual Supervision ☆95 · Updated last year
- Critique-out-Loud Reward Models ☆70 · Updated last year
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆86 · Updated 9 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆83 · Updated 11 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆76 · Updated 2 months ago
- Evolve LLM training instructions, from English instructions to any language. ☆119 · Updated 2 years ago
- Awesome LLM Self-Consistency: a curated list of self-consistency in large language models ☆115 · Updated 5 months ago
- Official code repository for the paper "Knowledge-Augmented Reasoning Distillation for Small Language Models in Knowledge-intensive Tasks… ☆42 · Updated last year
- This is my attempt to create a self-correcting LLM, based on the paper Training Language Models to Self-Correct via Reinforcement Learning by g… ☆38 · Updated 5 months ago
- ☆12 · Updated last year
- Repo of the paper "Free Process Rewards without Process Labels" ☆168 · Updated 9 months ago
- ☆70 · Updated last year
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆114 · Updated 5 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆180 · Updated 6 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆127 · Updated last year
- The repo for In-context Autoencoder ☆161 · Updated last year
- ☆166 · Updated 2 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆126 · Updated last year
- [NeurIPS 2024] The official implementation of the paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆134 · Updated 9 months ago
- [ACL 2025] DICE-BENCH: Evaluating the Tool-Use Capabilities of Large Language Models in Multi-Round, Multi-Party Dialogues ☆26 · Updated 5 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆181 · Updated 7 months ago
- ☆219 · Updated 9 months ago
- ☆138 · Updated 9 months ago
- A continually updated list of literature on Reinforcement Learning from AI Feedback (RLAIF) ☆193 · Updated 5 months ago
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆116 · Updated 2 years ago
- Direct Preference Optimization from scratch in PyTorch ☆123 · Updated 9 months ago