Generative Judge for Evaluating Alignment
☆250 · Jan 18, 2024 · Updated 2 years ago
Alternatives and similar repositories for auto-j
Users interested in auto-j are comparing it to the repositories listed below.
- Evaluate the Quality of Critique ☆36 · Jun 1, 2024 · Updated last year
- ☆25 · May 16, 2024 · Updated last year
- A large-scale, fine-grained, diverse preference dataset (and models). ☆363 · Dec 29, 2023 · Updated 2 years ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆107 · Mar 6, 2025 · Updated 11 months ago
- ☆78 · May 22, 2024 · Updated last year
- ☆51 · Mar 2, 2024 · Updated 2 years ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Aug 15, 2024 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆137 · Jul 8, 2024 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆77 · Oct 9, 2025 · Updated 4 months ago
- SOTA open-source math LLM ☆335 · Dec 12, 2023 · Updated 2 years ago
- ☆282 · Jan 6, 2025 · Updated last year
- Implementations of the online merging optimizers proposed in "Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment" ☆81 · Jun 19, 2024 · Updated last year
- [ACL 2024] Code for "MoPS: Modular Story Premise Synthesis for Open-Ended Automatic Story Generation" ☆42 · Jul 19, 2024 · Updated last year
- Collections of RLxLM experiments using minimal code ☆14 · Feb 17, 2025 · Updated last year
- ☆13 · Jul 14, 2024 · Updated last year
- ☆148 · Jul 1, 2024 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆697 · Feb 16, 2026 · Updated 2 weeks ago
- ☆922 · May 22, 2024 · Updated last year
- The official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Feb 22, 2024 · Updated 2 years ago
- Official implementation of "Learning to Refuse: Towards Mitigating Privacy Risks in LLMs" ☆10 · Dec 13, 2024 · Updated last year
- [NeurIPS 2024] A comprehensive benchmark for evaluating the critique ability of LLMs ☆49 · Nov 29, 2024 · Updated last year
- A multi-dimensional Chinese alignment evaluation benchmark for large language models (ACL 2024) ☆421 · Oct 25, 2025 · Updated 4 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆43 · Feb 15, 2024 · Updated 2 years ago
- [ICLR 2025 Spotlight] An open-source LLM judge for evaluating LLM-generated answers. ☆420 · Feb 11, 2025 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆149 · Oct 27, 2024 · Updated last year
- [AAAI 2025] Augmenting Math Word Problems via Iterative Question Composing (https://arxiv.org/abs/2401.09003) ☆23 · Oct 2, 2025 · Updated 5 months ago
- Official implementation of "Extending LLMs’ Context Window with 100 Samples" ☆81 · Jan 18, 2024 · Updated 2 years ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆589 · Dec 9, 2024 · Updated last year
- ☆144 · Sep 10, 2023 · Updated 2 years ago
- ☆313 · Jun 9, 2024 · Updated last year
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,589 · Nov 24, 2025 · Updated 3 months ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆266 · Jul 8, 2025 · Updated 7 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆446 · Oct 16, 2024 · Updated last year
- [NeurIPS 2023] RRHF & Wombat ☆809 · Sep 22, 2023 · Updated 2 years ago
- Multi-agent social simulation + an efficient, effective, and stable alternative to RLHF. Code for the paper "Training Socially Aligned Langu… ☆354 · Jun 18, 2023 · Updated 2 years ago
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Jul 17, 2024 · Updated last year
- FacTool: Factuality Detection in Generative AI ☆913 · Aug 19, 2024 · Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆842 · Jul 1, 2024 · Updated last year
- Scripts for generating synthetic fine-tuning data for reducing sycophancy. ☆121 · Aug 16, 2023 · Updated 2 years ago