agi-templar / Stable-Alignment
Multi-agent social simulation + an efficient, effective, and stable alternative to RLHF. Code for the paper "Training Socially Aligned Language Models in Simulated Human Society".
☆344 · Updated last year
Related projects
Alternatives and complementary repositories for Stable-Alignment
- A large-scale, fine-grained, diverse preference dataset (and models). ☆315 · Updated 10 months ago
- ☆259 · Updated 11 months ago
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆782 · Updated 4 months ago
- FireAct: Toward Language Agent Fine-tuning ☆255 · Updated last year
- Generative Judge for Evaluating Alignment ☆217 · Updated 10 months ago
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆219 · Updated 2 months ago
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆235 · Updated 7 months ago
- Paper List for a new paradigm of NLP: Interactive NLP (https://arxiv.org/abs/2305.13246) ☆213 · Updated last year
- This is the official implementation of "Progressive-Hint Prompting Improves Reasoning in Large Language Models" ☆201 · Updated last year
- Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" (ICLR 2024) ☆332 · Updated 2 months ago
- Papers related to LLM agents published at top conferences ☆305 · Updated 9 months ago
- This is the repository that contains the source code for the Self-Evaluation Guided MCTS for online DPO. ☆199 · Updated 3 months ago
- ☆252 · Updated last month
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆426 · Updated 4 months ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆240 · Updated last year
- ☆221 · Updated last year
- Official implementation of paper "Cumulative Reasoning With Large Language Models" (https://arxiv.org/abs/2308.04371) ☆287 · Updated 2 months ago
- All available datasets for Instruction Tuning of Large Language Models ☆237 · Updated 11 months ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆316 · Updated last month
- RewardBench: the first evaluation tool for reward models. ☆431 · Updated 3 weeks ago
- A new tool-learning benchmark, based on ToolBench, that aims to balance stability and realism. ☆114 · Updated 2 months ago
- [NeurIPS 2022] 🛒WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents ☆276 · Updated 2 months ago
- Self-Alignment with Principle-Following Reward Models ☆148 · Updated 8 months ago
- [NeurIPS 2023] RRHF & Wombat ☆798 · Updated last year
- Paper collection on building and evaluating language model agents via executable language grounding ☆339 · Updated 6 months ago
- A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval. ☆317 · Updated last year
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆104 · Updated 5 months ago
- SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks ☆279 · Updated 3 weeks ago
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆266 · Updated last year
- ☆708 · Updated 5 months ago