cambridgeltl / zepo
Fairer Preferences Elicit Improved Human-Aligned Large Language Model Judgments (Zhou et al., EMNLP 2024)
☆13 · Updated 8 months ago
Alternatives and similar repositories for zepo
Users interested in zepo are comparing it to the repositories listed below.
- [ACL 2023 Findings] What In-Context Learning “Learns” In-Context: Disentangling Task Recognition and Task Learning ☆21 · Updated last year
- Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K, by adding irrelevant se… ☆60 · Updated 2 years ago
- Code for our paper: "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models" ☆55 · Updated 2 years ago
- ☆28 · Updated last year
- In-context Example Selection with Influences ☆15 · Updated 2 years ago
- ☆12 · Updated 11 months ago
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆22 · Updated 10 months ago
- Code for preprint: Summarizing Differences between Text Distributions with Natural Language ☆42 · Updated 2 years ago
- [ACL 2023]: Training Trajectories of Language Models Across Scales https://arxiv.org/pdf/2212.09803.pdf ☆24 · Updated last year
- Few-shot Learning with Auxiliary Data ☆28 · Updated last year
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆75 · Updated last year
- ☆44 · Updated last year
- Teaching Models to Express Their Uncertainty in Words ☆39 · Updated 3 years ago
- This repository contains data, code and models for contextual noncompliance. ☆23 · Updated 11 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆47 · Updated 5 months ago
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆38 · Updated 2 years ago
- ☆44 · Updated 9 months ago
- Easy-to-use MIRAGE code for faithful answer attribution in RAG applications. Paper: https://aclanthology.org/2024.emnlp-main.347/ ☆24 · Updated 3 months ago
- Code and data for paper "Context-faithful Prompting for Large Language Models". ☆40 · Updated 2 years ago
- ☆41 · Updated last year
- ☆19 · Updated last year
- ☆14 · Updated last year
- Source codes for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023). ☆16 · Updated 5 months ago
- ☆22 · Updated 2 years ago
- ☆13 · Updated 6 months ago
- Models, data, and codes for the paper: MetaAligner: Towards Generalizable Multi-Objective Alignment of Language Models ☆19 · Updated 9 months ago
- This repository contains some of the code used in the paper "Training Language Models with Language Feedback at Scale" ☆27 · Updated 2 years ago
- Code for paper "Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?"☆22Updated 4 years ago
- This is the official implementation for our ACL 2024 paper: "Causal Estimation of Memorisation Profiles".☆23Updated 3 months ago
- Self-Supervised Alignment with Mutual Information☆19Updated last year