shadowkiller33 / Contrast-Instruction
☆19 · Updated last year
Alternatives and similar repositories for Contrast-Instruction:
Users interested in Contrast-Instruction are comparing it to the repositories listed below.
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] · ☆29 · Updated last week
- ☆27 · Updated 10 months ago
- Augmenting Statistical Models with Natural Language Parameters · ☆22 · Updated 4 months ago
- This repository contains data, code and models for contextual noncompliance. · ☆19 · Updated 6 months ago
- ☆44 · Updated 4 months ago
- Tasks for describing differences between text distributions. · ☆16 · Updated 5 months ago
- Code for "Tracing Knowledge in Language Models Back to the Training Data" · ☆37 · Updated 2 years ago
- ☆34 · Updated 11 months ago
- [ACL 2023 Findings] What In-Context Learning “Learns” In-Context: Disentangling Task Recognition and Task Learning · ☆21 · Updated last year
- In-context Example Selection with Influences · ☆15 · Updated last year
- ☆34 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training · ☆19 · Updated 5 months ago
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] · ☆17 · Updated 8 months ago
- Restore safety in fine-tuned language models through task arithmetic · ☆26 · Updated 10 months ago
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) · ☆74 · Updated last year
- Evaluate the Quality of Critique · ☆35 · Updated 7 months ago
- DEMix Layers for Modular Language Modeling · ☆53 · Updated 3 years ago
- This repository contains the dataset and code for "WiCE: Real-World Entailment for Claims in Wikipedia" in EMNLP 2023. · ☆40 · Updated last year
- Analyzing LLM Alignment via Token distribution shift · ☆15 · Updated last year
- Code for preprint: Summarizing Differences between Text Distributions with Natural Language · ☆42 · Updated last year
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. · ☆21 · Updated 7 months ago
- Code for our paper: "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models" · ☆53 · Updated last year
- ☆25 · Updated last year
- Fairer Preferences Elicit Improved Human-Aligned Large Language Model Judgments (Zhou et al., EMNLP 2024) · ☆12 · Updated 3 months ago
- ☆11 · Updated 7 months ago
- Directional Preference Alignment · ☆54 · Updated 4 months ago
- Teaching Models to Express Their Uncertainty in Words · ☆36 · Updated 2 years ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model · ☆42 · Updated last year
- ☆43 · Updated 5 months ago
- ☆22 · Updated 2 years ago