hbin0701 / Self-Explore
[EMNLP Findings 2024 & … Oral] Enhancing Mathematical Reasoning in Language Models with Fine-grained Rewards
☆51 · Updated last year
Alternatives and similar repositories for Self-Explore
Users interested in Self-Explore are comparing it to the repositories listed below.
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆76 · Updated 7 months ago
- [AAAI 2025 Oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆76 · Updated 3 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆52 · Updated 5 months ago
- ☆71 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆124 · Updated last year
- ☆58 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆64 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆46 · Updated 9 months ago
- [NeurIPS 2025] "Reasoning Models Better Express Their Confidence" ☆22 · Updated last month
- Critique-out-Loud Reward Models ☆71 · Updated last year
- [COLM 2025] EvalTree: Profiling Language Model Weaknesses via Hierarchical Capability Trees ☆31 · Updated 6 months ago
- Directional Preference Alignment ☆58 · Updated last year
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆70 · Updated 10 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆119 · Updated 8 months ago
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆116 · Updated 2 years ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆65 · Updated last year
- Official repo for the paper "Learning From Mistakes Makes LLM Better Reasoner" ☆60 · Updated 2 years ago
- GenRM-CoT: Data release for verification rationales ☆68 · Updated last year
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆83 · Updated last year
- ☆108 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆127 · Updated last year
- ☆102 · Updated 2 years ago
- Function Vectors in Large Language Models (ICLR 2024) ☆190 · Updated 8 months ago
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated 3 months ago
- ☆52 · Updated 9 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated 3 months ago
- Official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆38 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆39 · Updated 2 years ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆126 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆132 · Updated last year