agi-templar / Stable-Alignment
Multi-agent Social Simulation + an Efficient, Effective, and Stable alternative to RLHF. Code for the paper "Training Socially Aligned Language Models in Simulated Human Society".
☆354 · Updated 2 years ago
Alternatives and similar repositories for Stable-Alignment
Users interested in Stable-Alignment are comparing it to the repositories listed below.
- Paper List for a new paradigm of NLP: Interactive NLP (https://arxiv.org/abs/2305.13246) ☆217 · Updated 2 years ago
- ☆281 · Updated last year
- A large-scale, fine-grained, diverse preference dataset (and models). ☆360 · Updated 2 years ago
- This is the official implementation of "Progressive-Hint Prompting Improves Reasoning in Large Language Models" ☆209 · Updated 2 years ago
- ☆249 · Updated 3 years ago
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆840 · Updated last year
- Generative Judge for Evaluating Alignment ☆249 · Updated 2 years ago
- FireAct: Toward Language Agent Fine-tuning ☆291 · Updated 2 years ago
- Simple next-token-prediction for RLHF ☆227 · Updated 2 years ago
- Code for the arXiv 2023 paper "Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback" ☆207 · Updated 2 years ago
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" ☆269 · Updated last year
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆267 · Updated last year
- [NIPS2023] RRHF & Wombat ☆809 · Updated 2 years ago
- Datasets for Instruction Tuning of Large Language Models ☆260 · Updated 2 years ago
- Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" [ICLR 2024] ☆380 · Updated last year
- A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and evaluation. ☆386 · Updated 2 years ago
- This is the repo for the paper "Shepherd: A Critic for Language Model Generation" ☆221 · Updated 2 years ago
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆303 · Updated 11 months ago
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆540 · Updated last year
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆137 · Updated 8 months ago
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated 4 months ago
- Prod Env ☆436 · Updated 2 years ago
- Accompanying repo for the RLPrompt paper ☆358 · Updated last year
- ☆143 · Updated 2 years ago
- Paper collection on building and evaluating language model agents via executable language grounding ☆363 · Updated last year
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆284 · Updated 2 years ago
- ☆770 · Updated last year
- Papers related to LLM agents published at top conferences ☆320 · Updated 9 months ago
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆132 · Updated last year
- SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks ☆323 · Updated last year