agi-templar / Stable-Alignment
Multi-agent social simulation + an efficient, effective, and stable alternative to RLHF. Code for the paper "Training Socially Aligned Language Models in Simulated Human Society".
☆351 · Updated 2 years ago
Alternatives and similar repositories for Stable-Alignment
Users interested in Stable-Alignment are comparing it to the libraries listed below.
- A large-scale, fine-grained, diverse preference dataset (and models). ☆354 · Updated last year
- Paper List for a new paradigm of NLP: Interactive NLP (https://arxiv.org/abs/2305.13246) ☆213 · Updated 2 years ago
- This is the official implementation of "Progressive-Hint Prompting Improves Reasoning in Large Language Models" ☆209 · Updated 2 years ago
- ☆280 · Updated 9 months ago
- ☆243 · Updated 2 years ago
- Simple next-token-prediction for RLHF ☆226 · Updated 2 years ago
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆263 · Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆826 · Updated last year
- [NIPS2023] RRHF & Wombat ☆811 · Updated 2 years ago
- FireAct: Toward Language Agent Fine-tuning ☆283 · Updated 2 years ago
- Generative Judge for Evaluating Alignment ☆247 · Updated last year
- Code for Arxiv 2023: Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback ☆207 · Updated 2 years ago
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆297 · Updated 8 months ago
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆266 · Updated last year
- Paper collection on building and evaluating language model agents via executable language grounding ☆362 · Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆521 · Updated last year
- Papers related to LLM agents published at top conferences ☆320 · Updated 6 months ago
- SwiftSage: A Generative Agent with Fast and Slow Thinking for Complex Interactive Tasks ☆316 · Updated last year
- Reasoning with Language Model is Planning with World Model ☆175 · Updated 2 years ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆138 · Updated 5 months ago
- [TMLR] Cumulative Reasoning With Large Language Models (https://arxiv.org/abs/2308.04371) ☆302 · Updated 2 months ago
- ☆141 · Updated 2 years ago
- Datasets for Instruction Tuning of Large Language Models ☆257 · Updated last year
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆131 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated last month
- Code and data for "MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning" [ICLR 2024] ☆377 · Updated last year
- Data and Code for Program of Thoughts [TMLR 2023] ☆292 · Updated last year
- Accompanying repo for the RLPrompt paper ☆355 · Updated last year
- (ICML 2024) Alphazero-like Tree-Search can guide large language model decoding and training ☆283 · Updated last year
- A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval. ☆381 · Updated 2 years ago