FuRuF-11 / AITLinks
A repository introducing algorithmic information theory. Here you can learn what Kolmogorov complexity is and why it matters.
☆11 · Updated 4 months ago
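Kolmogorov complexity, the length of the shortest program that outputs a given string, is uncomputable, but a standard practical proxy is to upper-bound it with an off-the-shelf compressor. The sketch below is not from the repository; it is an illustrative example using `zlib`, with the byte strings and thresholds chosen purely for demonstration.

```python
import hashlib
import zlib


def compression_length(data: bytes) -> int:
    """Length of the zlib-compressed encoding: a computable upper
    bound (up to an additive constant) on Kolmogorov complexity."""
    return len(zlib.compress(data, 9))


# A highly structured string: describable as "repeat b'ab' 500 times".
structured = b"ab" * 500

# A deterministic but patternless string: 1000 bytes built by
# chaining SHA-256, which a general-purpose compressor cannot shrink.
chunks, seed = [], b"seed"
while sum(len(c) for c in chunks) < 1000:
    seed = hashlib.sha256(seed).digest()
    chunks.append(seed)
patternless = b"".join(chunks)[:1000]

# Both inputs are 1000 bytes, yet the structured one compresses to a
# tiny fraction of its length while the patternless one does not.
print(compression_length(structured), compression_length(patternless))
```

The gap between the two compressed lengths is the intuition behind Kolmogorov complexity: equal-length strings can have very different description lengths, and compressibility is one computable way to observe that.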
Alternatives and similar repositories for AIT
Users interested in AIT are comparing it to the repositories listed below.
- Code for the paper LEGO-Prover: Neural Theorem Proving with Growing Libraries ☆68 · Updated last year
- Code for NeurIPS 2024 paper "Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs" ☆43 · Updated 9 months ago
- Rewarded soups official implementation ☆62 · Updated 2 years ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆182 · Updated 6 months ago
- Ideas for projects related to Tinker ☆112 · Updated last month
- Official implementation of ICLR 2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆70 · Updated 8 months ago
- ☆54 · Updated last year
- An index of algorithms for reinforcement learning from human feedback (RLHF) ☆92 · Updated last year
- Code for the paper: Dense Reward for Free in Reinforcement Learning from Human Feedback (ICML 2024) by Alex J. Chan, Hao Sun, Samuel Holt… ☆37 · Updated last year
- Implementation of ICLR 2025 paper "Q-Adapter: Customizing Pre-trained LLMs to New Preferences with Forgetting Mitigation" ☆18 · Updated last year
- ☆51 · Updated 2 years ago
- Implementation for NeurIPS 2024 oral paper: Divide-and-Conquer Meets Consensus: Unleashing the Power of Functions in Code Generation ☆16 · Updated 10 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆124 · Updated last year
- ☆106 · Updated last year
- GenRM-CoT: Data release for verification rationales ☆66 · Updated last year
- Research Code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆198 · Updated 7 months ago
- Domain-specific preference (DSP) data and customized RM fine-tuning. ☆25 · Updated last year
- ☆76 · Updated last year
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆93 · Updated last year
- Minimal RLHF implementation built on top of minGPT. ☆30 · Updated last year
- A Telegram bot to recommend arXiv papers ☆289 · Updated last month
- OptiBench and ReSocratic Synthesis Method ☆28 · Updated 2 months ago
- Natural Language Reinforcement Learning ☆100 · Updated 4 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆148 · Updated 9 months ago
- ☆34 · Updated last year
- ☆106 · Updated last year
- ☆26 · Updated last year
- ☆33 · Updated last year
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆199 · Updated last year
- A brief and partial summary of RLHF algorithms. ☆139 · Updated 9 months ago