Simple next-token-prediction for RLHF
☆229 · Sep 30, 2023 · Updated 2 years ago
Alternatives and similar repositories for chain-of-hindsight
Users that are interested in chain-of-hindsight are comparing it to the libraries listed below.
- ☆158 · Mar 18, 2023 · Updated 3 years ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,743 · Jan 8, 2024 · Updated 2 years ago
- ☆284 · Jan 6, 2025 · Updated last year
- [NIPS2023] RRHF & Wombat ☆808 · Sep 22, 2023 · Updated 2 years ago
- A repository for transformer critique learning and generation ☆89 · Dec 7, 2023 · Updated 2 years ago
- ☆75 · Nov 3, 2023 · Updated 2 years ago
- A large-scale, fine-grained, diverse preference dataset (and models). ☆367 · Dec 29, 2023 · Updated 2 years ago
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆843 · Jul 1, 2024 · Updated last year
- Unofficial implementation of Chain of Hindsight (https://arxiv.org/abs/2302.02676) using pytorch and huggingface Trainers. ☆11 · Apr 5, 2023 · Updated 3 years ago
- ☆72 · May 22, 2023 · Updated 2 years ago
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆117 · Jun 28, 2025 · Updated 9 months ago
- Collection of papers for scalable automated alignment. ☆93 · Oct 22, 2024 · Updated last year
- Code repository for the c-BTM paper ☆109 · Sep 26, 2023 · Updated 2 years ago
- Self-Alignment with Principle-Following Reward Models ☆170 · Sep 18, 2025 · Updated 6 months ago
- Code for the paper "Decomposing the Enigma: Subgoal-based Demonstration Learning for Formal Theorem Proving" ☆19 · May 25, 2023 · Updated 2 years ago
- ☆11 · Sep 19, 2025 · Updated 6 months ago
- (NeurIPS '22) LISA: Learning Interpretable Skill Abstractions - A framework for unsupervised skill learning using Imitation ☆29 · Feb 22, 2023 · Updated 3 years ago
- ☆315 · Jun 9, 2024 · Updated last year
- A repository of projects and datasets under active development by Alignment Lab AI ☆22 · Dec 22, 2023 · Updated 2 years ago
- Dromedary: towards helpful, ethical and reliable LLMs. ☆1,142 · Sep 18, 2025 · Updated 6 months ago
- RewardBench: the first evaluation tool for reward models. ☆707 · Feb 16, 2026 · Updated last month
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆107 · Mar 6, 2025 · Updated last year
- Multi-agent Social Simulation + Efficient, Effective, and Stable alternative of RLHF. Code for the paper "Training Socially Aligned Langu…" ☆355 · Jun 18, 2023 · Updated 2 years ago
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,420 · Mar 3, 2024 · Updated 2 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆98 · Apr 26, 2023 · Updated 2 years ago
- ☆256 · Dec 21, 2022 · Updated 3 years ago
- Self-Supervised Alignment with Mutual Information ☆20 · May 24, 2024 · Updated last year
- Scaling Data-Constrained Language Models ☆342 · Jun 28, 2025 · Updated 9 months ago
- Code accompanying the paper Pretraining Language Models with Human Preferences ☆182 · Feb 13, 2024 · Updated 2 years ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆906 · Sep 30, 2025 · Updated 6 months ago
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆88 · Sep 12, 2024 · Updated last year
- Accompanying repo for the RLPrompt paper ☆361 · Jun 6, 2024 · Updated last year
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,595 · Nov 24, 2025 · Updated 4 months ago
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,112 · Jun 1, 2023 · Updated 2 years ago
- All-in-one repository for Fine-tuning & Pretraining (Large) Language Models ☆15 · Mar 8, 2023 · Updated 3 years ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆202 · Jun 22, 2023 · Updated 2 years ago
- Code for arXiv 2023: Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback ☆208 · May 24, 2023 · Updated 2 years ago
- A modular RL library to fine-tune language models to human preferences ☆2,385 · Mar 1, 2024 · Updated 2 years ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆256 · Oct 31, 2023 · Updated 2 years ago