A continually updated list of literature on Reinforcement Learning from AI Feedback (RLAIF)
☆196 · Updated Aug 6, 2025
Alternatives and similar repositories for awesome-RLAIF
Users that are interested in awesome-RLAIF are comparing it to the libraries listed below
- ☆98 · updated Jun 27, 2024
- A curated list of reinforcement learning with human feedback resources (continually updated) · ☆4,317 · updated Dec 9, 2025
- 3D - NeRF++ volume rendering visualization · ☆13 · updated Jan 24, 2022
- ☆28 · updated Apr 3, 2025
- A curated list of awesome resources dedicated to Scaling Laws for LLMs · ☆81 · updated Apr 10, 2023
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness · ☆447 · updated May 14, 2025
- A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval · ☆386 · updated Oct 4, 2023
- Repository of the paper "How Likely Do LLMs with CoT Mimic Human Reasoning?" · ☆23 · updated Feb 19, 2025
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models · ☆45 · updated Jun 14, 2024
- Recipes to train reward models for RLHF · ☆1,518 · updated Apr 24, 2025
- Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" · ☆19 · updated Nov 30, 2022
- [ACL 2023 Findings] What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning · ☆21 · updated Jul 9, 2023
- RewardBench: the first evaluation tool for reward models · ☆702 · updated Feb 16, 2026
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback · ☆307 · updated Sep 11, 2024
- Directed masked autoencoders · ☆14 · updated Feb 20, 2026
- Implementation for the NeurIPS 2022 paper "ZIN: When and How to Learn Invariance Without Environment Partition?" · ☆22 · updated Dec 3, 2022
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" · ☆69 · updated Aug 18, 2023
- Code for "Semi-crowdsourced Clustering with Deep Generative Models" · ☆12 · updated Dec 9, 2022
- Shaping Language Models with Cognitive Insights · ☆15 · updated Feb 29, 2024
- Neural Radiance Fields (NeRF) paper study notes · ☆10 · updated Sep 25, 2021
- ☆12 · updated Oct 5, 2022
- Code for the paper "SizeShiftReg: a Regularization Method for Improving Size-Generalization in Graph Neural Networks" · ☆12 · updated Jan 17, 2023
- Draw flowcharts from natural language, built on OpenAI · ☆12 · updated Nov 13, 2023
- ☆18 · updated Aug 1, 2025
- ☆13 · updated May 25, 2023
- Official repo of Progressive Data Expansion: data, code, and evaluation · ☆29 · updated Nov 16, 2023
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.… · ☆28 · updated Sep 25, 2024
- Reference implementation for DPO (Direct Preference Optimization) · ☆2,861 · updated Aug 11, 2024
- 🤓 A collection of AWESOME structured summaries of Large Language Models (LLMs) · ☆31 · updated Sep 7, 2023
- ☆33 · updated this week
- Feeling confused about super alignment? Here is a reading list · ☆44 · updated Jan 9, 2024
- ☆12 · updated Feb 28, 2025
- ☆13 · updated Sep 12, 2024
- ☆46 · updated Jan 29, 2024
- [ICML 2024] Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical · ☆12 · updated May 12, 2024
- [NeurIPS 2023] "Combating Bilateral Edge Noise for Robust Link Prediction" · ☆11 · updated Nov 3, 2023
- Code repository for the blog post "How to Productionize Large Language Models (LLMs)" · ☆12 · updated Mar 27, 2024
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) · ☆692 · updated Jan 20, 2025
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO, DAPO, REINFORCE++, TIS, vLLM, async RL) · ☆9,084 · updated this week