PKU-Alignment / AlignmentSurvey
AI Alignment: A Comprehensive Survey
☆137 · Updated 2 years ago
Alternatives and similar repositories for AlignmentSurvey
Users interested in AlignmentSurvey are comparing it to the repositories listed below.
- A curated reading list for large language model (LLM) alignment. Take a look at our new survey "Large Language Model Alignment: A Survey"… ☆81 · Updated 2 years ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆190 · Updated 10 months ago
- Feeling confused about superalignment? Here is a reading list. ☆43 · Updated last year
- An index of algorithms for reinforcement learning from human feedback (RLHF) ☆92 · Updated last year
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆198 · Updated last year
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆145 · Updated last year
- Paper list for a new paradigm of NLP: Interactive NLP (https://arxiv.org/abs/2305.13246) ☆216 · Updated 2 years ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆148 · Updated 9 months ago
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" ☆268 · Updated last year
- Fantastic Data Engineering for Large Language Models ☆92 · Updated 11 months ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆64 · Updated 11 months ago
- Domain-specific preference (DSP) data and customized RM fine-tuning ☆25 · Updated last year
- ☆34 · Updated last year
- Related works and background techniques for OpenAI o1 ☆221 · Updated 11 months ago
- On Memorization of Large Language Models in Logical Reasoning ☆72 · Updated 8 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆159 · Updated last year
- A continually updated list of literature on Reinforcement Learning from AI Feedback (RLAIF) ☆192 · Updated 4 months ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" ☆56 · Updated last year
- A curated list of awesome resources dedicated to scaling laws for LLMs ☆80 · Updated 2 years ago
- Repo for the paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆52 · Updated last year
- A collection of papers on scalable automated alignment ☆94 · Updated last year
- Code for the paper "SelfCheck: Using LLMs to Zero-Shot Check Their Own Step-by-Step Reasoning" ☆49 · Updated 2 years ago
- ☆162 · Updated 10 months ago
- A repo recording resources on role-playing abilities in LLMs, including datasets, papers, applications, etc. ☆133 · Updated last year
- [ACL 2024] Planning, Creation, Usage: Benchmarking LLMs for Comprehensive Tool Utilization in Real-World Complex Scenarios ☆66 · Updated 4 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆84 · Updated last year
- ☆142 · Updated 2 years ago
- Implementation of the paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision" ☆56 · Updated last year
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆132 · Updated 2 years ago
- [ACL 2025] A Neural-Symbolic Self-Training Framework ☆117 · Updated 6 months ago