Open-Social-World / autolibra
AutoLibra: Metric Induction for Agents from Open-Ended Human Feedback
☆17 · Updated 3 months ago
Alternatives and similar repositories for autolibra
Users interested in autolibra are comparing it to the libraries listed below.
- [NeurIPS 2025] What Makes a Reward Model a Good Teacher? An Optimization Perspective ☆42 · Updated 4 months ago
- ☆32 · Updated last year
- ☆14 · Updated 9 months ago
- The official repository of "SmartAgent: Chain-of-User-Thought for Embodied Personalized Agent in Cyber World". ☆27 · Updated 5 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives". ☆29 · Updated last year
- ☆13 · Updated 6 months ago
- ☆16 · Updated 8 months ago
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023). ☆17 · Updated last year
- ☆20 · Updated 2 months ago
- Official implementation of the ICLR 2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆70 · Updated 9 months ago
- ☆31 · Updated last year
- ☆17 · Updated 6 months ago
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆51 · Updated 8 months ago
- ☆25 · Updated 9 months ago
- Official code repository for "AutoScale📈: Scale-Aware Data Mixing for Pre-Training LLMs", published as a conference paper at COLM 2025… ☆12 · Updated 5 months ago
- ☆11 · Updated 2 years ago
- SCoRe: Training Language Models to Self-Correct via Reinforcement Learning ☆15 · Updated last year
- ☆50 · Updated 11 months ago
- Official code for the paper "WALL-E: World Alignment by NeuroSymbolic Learning improves World Model-based LLM Agents" ☆55 · Updated last month
- From Accuracy to Robustness: A Study of Rule- and Model-based Verifiers in Mathematical Reasoning. ☆24 · Updated 3 months ago
- DataSciBench: An LLM Agent Benchmark for Data Science ☆50 · Updated last week
- Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data" ☆28 · Updated 2 years ago
- KnowRL: Exploring Knowledgeable Reinforcement Learning for Factuality ☆39 · Updated 2 months ago
- ☆72 · Updated 7 months ago
- Reinforced Multi-LLM Agents training ☆69 · Updated 2 weeks ago
- ☆19 · Updated last year
- Official implementation of Rewarded Soups ☆62 · Updated 2 years ago
- Official code implementation for the ACL 2025 paper "Dynamic Scaling of Unit Tests for Code Reward Modeling" ☆27 · Updated 8 months ago
- Directional Preference Alignment ☆58 · Updated last year
- RAG-RewardBench: Benchmarking Reward Models in Retrieval Augmented Generation for Preference Alignment ☆16 · Updated last year