PKU-Alignment / AlignmentSurvey
AI Alignment: A Comprehensive Survey
☆128 · Updated last year
Related projects
Alternatives and complementary repositories for AlignmentSurvey
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆119 · Updated last week
- A curated reading list for large language model (LLM) alignment. Take a look at our new survey "Large Language Model Alignment: A Survey"… ☆71 · Updated last year
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆151 · Updated 11 months ago
- Related works and background techniques for OpenAI o1 ☆142 · Updated last week
- An index of algorithms for reinforcement learning from human feedback (RLHF) ☆87 · Updated 7 months ago
- Feeling confused about super alignment? Here is a reading list ☆43 · Updated 10 months ago
- This is the repository that contains the source code for the Self-Evaluation Guided MCTS for online DPO ☆199 · Updated 3 months ago
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆285 · Updated 4 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆84 · Updated 4 months ago
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆219 · Updated 2 months ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ☆316 · Updated last month
- Project for the paper entitled "Instruction Tuning for Large Language Models: A Survey" ☆146 · Updated last month
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆36 · Updated 3 months ago
- A continually updated list of literature on Reinforcement Learning from AI Feedback (RLAIF) ☆138 · Updated last month
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆107 · Updated 4 months ago
- Paper List for a new paradigm of NLP: Interactive NLP (https://arxiv.org/abs/2305.13246) ☆213 · Updated last year
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs) ☆111 · Updated last year
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆74 · Updated 9 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆145 · Updated 5 months ago
- Official github repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆127 · Updated 2 months ago
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆275 · Updated last year
- Collection of papers for scalable automated alignment ☆73 · Updated 3 weeks ago
- [SIGIR'24] The official implementation code of MOELoRA ☆124 · Updated 3 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆100 · Updated 2 weeks ago
- ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆62 · Updated 7 months ago
- InsTag: A Tool for Data Analysis in LLM Supervised Fine-tuning ☆218 · Updated last year
- Collection of papers on methods that use language to interact with an environment, including interaction with the real world, simulated worlds, or the WWW… ☆123 · Updated last year