cambridgeltl / PairS
Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024)
☆47 · Updated 6 months ago
Alternatives and similar repositories for PairS
Users interested in PairS are comparing it to the repositories listed below.
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆57 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆122 · Updated 8 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆35 · Updated 10 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆81 · Updated 11 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆127 · Updated last year
- ☆125 · Updated 10 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- ☆73 · Updated last year
- ☆29 · Updated last year
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- The instructions and demonstrations for building a formal logical reasoning capable GLM ☆53 · Updated 11 months ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆72 · Updated 8 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆146 · Updated 9 months ago
- ☆49 · Updated 11 months ago
- CodeUltraFeedback: aligning large language models to coding preferences ☆71 · Updated last year
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆51 · Updated last month
- Lightweight tool to identify Data Contamination in LLMs evaluation ☆51 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- ☆65 · Updated last year
- Code for "Can Retriever-Augmented Language Models Reason? The Blame Game Between the Retriever and the Language Model", EMNLP Findings 20… ☆28 · Updated last year
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆85 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆162 · Updated 2 months ago
- Code for EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" ☆55 · Updated 10 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆114 · Updated last year
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆45 · Updated last year
- Replicating O1 inference-time scaling laws ☆89 · Updated 8 months ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆49 · Updated 8 months ago
- Code and data for paper "Context-faithful Prompting for Large Language Models". ☆41 · Updated 2 years ago
- Code for the arXiv paper: "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond" ☆59 · Updated 6 months ago