Multi-agent Social Simulation + an Efficient, Effective, and Stable Alternative to RLHF. Code for the paper "Training Socially Aligned Language Models in Simulated Human Society".
☆354 · Updated Jun 18, 2023
Alternatives and similar repositories for Stable-Alignment
Users interested in Stable-Alignment are comparing it to the libraries listed below.
- Code for arXiv 2023: Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback (☆208 · Updated May 24, 2023)
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback (☆1,585 · Updated Nov 24, 2025)
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. (☆842 · Updated Jul 1, 2024)
- [NIPS2023] RRHF & Wombat (☆809 · Updated Sep 22, 2023)
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" (☆1,816 · Updated Jun 17, 2025)
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting (☆2,766 · Updated Aug 4, 2024)
- Secrets of RLHF in Large Language Models Part I: PPO (☆1,416 · Updated Mar 3, 2024)
- Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory (☆638 · Updated Jun 5, 2023)
- ChatArena (or Chat Arena) is a set of multi-agent language game environments for LLMs, aimed at developing their communication and collaboration capabilities. (☆1,540 · Updated Aug 11, 2025)
- This is the official implementation of "Progressive-Hint Prompting Improves Reasoning in Large Language Models" (☆209 · Updated Oct 11, 2023)
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) (☆4,741 · Updated Jan 8, 2024)
- ☆282 · Updated Jan 6, 2025
- Dromedary: towards helpful, ethical and reliable LLMs. (☆1,144 · Updated Sep 18, 2025)
- Code for RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs. ACL 2023. (☆64 · Updated Nov 27, 2024)
- 800,000 step-level correctness labels on LLM solutions to MATH problems (☆2,091 · Updated Jun 1, 2023)
- A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24) (☆3,166 · Updated Feb 8, 2026)
- A modular RL library to fine-tune language models to human preferences (☆2,378 · Updated Mar 1, 2024)
- Paper List for In-context Learning 🌷 (☆875 · Updated Oct 8, 2024)
- Aligning pretrained language models with instruction data generated by themselves. (☆4,576 · Updated Mar 27, 2023)
- Generative Judge for Evaluating Alignment (☆250 · Updated Jan 18, 2024)
- [EMNLP 2022] Training Language Models with Memory Augmentation (https://arxiv.org/abs/2205.12674) (☆195 · Updated Jun 14, 2023)
- RewardBench: the first evaluation tool for reward models. (☆696 · Updated Feb 16, 2026)
- Open Academic Research on Improving LLaMA to SOTA LLM (☆1,611 · Updated Aug 30, 2023)
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) (☆9,037 · Updated Feb 21, 2026)
- ☆921 · Updated May 22, 2024
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively. (☆782 · Updated Oct 4, 2024)
- Self-Alignment with Principle-Following Reward Models (☆169 · Updated Sep 18, 2025)
- Reference implementation for DPO (Direct Preference Optimization) (☆2,855 · Updated Aug 11, 2024)
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" (☆199 · Updated Dec 16, 2023)
- Momentum Decoding: Open-ended Text Generation as Graph Exploration (☆19 · Updated Jan 27, 2023)
- Recipes to train reward models for RLHF. (☆1,515 · Updated Apr 24, 2025)
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them (☆548 · Updated Jun 25, 2024)
- LOMO: LOw-Memory Optimization (☆988 · Updated Jul 2, 2024)
- AgentTuning: Enabling Generalized Agent Abilities for LLMs (☆1,477 · Updated Oct 31, 2023)
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs and parameter-efficient methods (e.g., LoRA, P-Tuning). (☆2,798 · Updated Dec 12, 2023)
- Instruction Tuning with GPT-4 (☆4,342 · Updated Jun 11, 2023)
- A curated list of reinforcement learning with human feedback resources (continually updated) (☆4,301 · Updated Dec 9, 2025)
- Reading list on hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models" (☆1,076 · Updated Sep 27, 2025)
- Unofficial implementation of Chain of Hindsight (https://arxiv.org/abs/2302.02676) using PyTorch and Hugging Face Trainers (☆11 · Updated Apr 5, 2023)