siyan-zhao / ICL_decision_boundary
Official code for the paper "Probing the Decision Boundaries of In-context Learning in Large Language Models" [NeurIPS 2024]. https://arxiv.org/abs/2406.11233
☆18 · Updated 9 months ago
Alternatives and similar repositories for ICL_decision_boundary
Users interested in ICL_decision_boundary are comparing it to the libraries listed below.
- ☆40 · Updated last year
- Rewarded soups official implementation ☆58 · Updated last year
- What Makes a Reward Model a Good Teacher? An Optimization Perspective ☆32 · Updated this week
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives". ☆24 · Updated 7 months ago
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆35 · Updated last year
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆17 · Updated last year
- A repo for RLHF training and BoN over LLMs, with support for reward model ensembles. ☆44 · Updated 5 months ago
- ☆49 · Updated last year
- A Sober Look at Language Model Reasoning ☆74 · Updated last week
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆72 · Updated 2 weeks ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆80 · Updated 10 months ago
- Test-time-training on nearest neighbors for large language models ☆41 · Updated last year
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… ☆16 · Updated 7 months ago
- [ACL'24, Outstanding Paper] Emulated Disalignment: Safety Alignment for Large Language Models May Backfire! ☆37 · Updated 10 months ago
- ☆29 · Updated last year
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆34 · Updated 5 months ago
- Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples ☆37 · Updated 2 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆121 · Updated 9 months ago
- Official code for SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆28 · Updated 2 months ago
- Source codes for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023). ☆16 · Updated 5 months ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆26 · Updated last year
- Code for "Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective" ☆20 · Updated last year
- Lightweight Adapting for Black-Box Large Language Models ☆22 · Updated last year
- Co-Supervised Learning: Improving Weak-to-Strong Generalization with Hierarchical Mixture of Experts ☆16 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆72 · Updated 3 months ago
- ☆44 · Updated 3 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆105 · Updated 2 months ago
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆29 · Updated 5 months ago
- Bayesian low-rank adaptation for large language models ☆23 · Updated last year
- Official implementation of the ICLR 2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆62 · Updated 2 months ago