hbin0701 / Self-Explore
[EMNLP Findings 2024] Enhancing Mathematical Reasoning in Language Models with Fine-grained Rewards
☆51 · Updated last year
Alternatives and similar repositories for Self-Explore
Users interested in Self-Explore are comparing it to the libraries listed below
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆62 · Updated 11 months ago
- ☆59 · Updated 9 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 2 months ago
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆73 · Updated last month
- ☆28 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆63 · Updated 6 months ago
- ☆67 · Updated last year
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆57 · Updated 8 months ago
- Directional Preference Alignment ☆57 · Updated 9 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- Official repo for the paper "Learning From Mistakes Makes LLM Better Reasoner" ☆59 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆69 · Updated last year
- Critique-out-Loud Reward Models ☆66 · Updated 8 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated 10 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆48 · Updated 6 months ago
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆32 · Updated 9 months ago
- ☆44 · Updated 9 months ago
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated last year
- Revisiting Mid-training in the Era of RL Scaling ☆56 · Updated 2 months ago
- [NeurIPS 2024 Main Track] Code for the paper titled "Instruction Tuning With Loss Over Instructions" ☆38 · Updated last year
- ☆13 · Updated 11 months ago
- Evaluate the Quality of Critique ☆35 · Updated last year
- ☆51 · Updated 2 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆120 · Updated 9 months ago
- ☆46 · Updated 7 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆78 · Updated 5 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆112 · Updated last year
- [ICLR'25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆48 · Updated last month
- ☆40 · Updated last year
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆57 · Updated 3 months ago