agi-templar / Stable-Alignment
Multi-agent social simulation plus an efficient, effective, and stable alternative to RLHF. Code for the paper "Training Socially Aligned Language Models in Simulated Human Society".
☆353 · Updated 2 years ago
Alternatives and similar repositories for Stable-Alignment
Users interested in Stable-Alignment are comparing it to the libraries listed below.
- A large-scale, fine-grained, diverse preference dataset (and models). ☆350 · Updated last year
- Paper List for a new paradigm of NLP: Interactive NLP (https://arxiv.org/abs/2305.13246) ☆214 · Updated 2 years ago
- This is the official implementation of "Progressive-Hint Prompting Improves Reasoning in Large Language Models" ☆209 · Updated last year
- ☆280 · Updated 8 months ago
- FireAct: Toward Language Agent Fine-tuning ☆282 · Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆825 · Updated last year
- Simple next-token-prediction for RLHF ☆227 · Updated last year
- Code for arXiv 2023: Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback ☆207 · Updated 2 years ago
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆263 · Updated last year
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆291 · Updated 7 months ago
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆266 · Updated last year
- ☆242 · Updated 2 years ago
- Generative Judge for Evaluating Alignment ☆245 · Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆511 · Updated last year
- Datasets for Instruction Tuning of Large Language Models ☆255 · Updated last year
- Papers related to LLM agents published at top conferences ☆317 · Updated 5 months ago
- [NIPS2023] RRHF & Wombat ☆812 · Updated last year
- SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks ☆315 · Updated 10 months ago
- Self-Alignment with Principle-Following Reward Models ☆165 · Updated 4 months ago
- Reasoning with Language Model is Planning with World Model ☆171 · Updated 2 years ago
- Paper collection on building and evaluating language model agents via executable language grounding ☆361 · Updated last year
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆139 · Updated 4 months ago
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation ☆219 · Updated 2 years ago
- (ICML 2024) AlphaZero-like Tree-Search can guide large language model decoding and training ☆280 · Updated last year
- Official implementation of TMLR paper "Cumulative Reasoning With Large Language Models" (https://arxiv.org/abs/2308.04371) ☆302 · Updated last month
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆129 · Updated last year
- Accompanying repo for the RLPrompt paper ☆352 · Updated last year
- [NeurIPS 2022] 🛒 WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents ☆396 · Updated last year
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆277 · Updated 2 years ago
- This is the repository that contains the source code for the Self-Evaluation Guided MCTS for online DPO. ☆322 · Updated last year