siyan-zhao / ICL_decision_boundary
Official code for the paper "Probing the Decision Boundaries of In-context Learning in Large Language Models" [NeurIPS 2024]. https://arxiv.org/abs/2406.11233
☆18 · Updated 8 months ago
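The paper probes how an LLM partitions an input space when given a handful of labeled examples in its prompt. As a rough illustration (not the repo's actual code), the sketch below builds an in-context prompt from labeled 2D points and queries a model over a grid of test points to trace the implied decision boundary; `build_prompt` and `query_llm` are hypothetical helpers, with the model call replaced by a fixed rule so the script runs stand-alone:

```python
# A minimal sketch of the probing setup: serialize labeled 2D points into an
# in-context prompt, query a model over a grid of test points, and record the
# predicted label at each cell to trace the decision boundary.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification task: two Gaussian clusters in 2D.
n_per_class = 16
class0 = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(n_per_class, 2))
class1 = rng.normal(loc=[6.0, 6.0], scale=1.0, size=(n_per_class, 2))
examples = [(p, 0) for p in class0] + [(p, 1) for p in class1]
rng.shuffle(examples)

def build_prompt(examples, query):
    """Format in-context examples plus one query point as plain text."""
    lines = ["Classify each 2D point as label 0 or 1."]
    for (x, y), label in examples:
        lines.append(f"Input: ({x:.2f}, {y:.2f}) Label: {label}")
    lines.append(f"Input: ({query[0]:.2f}, {query[1]:.2f}) Label:")
    return "\n".join(lines)

def query_llm(prompt):
    """Hypothetical placeholder for an LLM call, so the sketch runs end to
    end: labels the query point with a fixed linear rule."""
    last = prompt.strip().splitlines()[-1]
    coords = last.split("(")[1].split(")")[0].split(",")
    x, y = float(coords[0]), float(coords[1])
    return "1" if x + y > 8.0 else "0"

# Probe on a grid; the resulting label map is the ICL decision boundary.
xs, ys = np.linspace(-1, 9, 25), np.linspace(-1, 9, 25)
boundary = np.zeros((len(ys), len(xs)), dtype=int)
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        boundary[i, j] = int(query_llm(build_prompt(examples, (x, y))))

print(boundary)  # the paper studies how smooth or fragmented this map is
```

A real probe would swap `query_llm` for an actual model call and compare the resulting boundary maps across models and numbers of in-context examples.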
Alternatives and similar repositories for ICL_decision_boundary:
Users interested in ICL_decision_boundary are comparing it to the repositories listed below
- ☆40 · Updated last year
- ☆93 · Updated last year
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". ☆101 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆72 · Updated last month
- What Makes a Reward Model a Good Teacher? An Optimization Perspective ☆26 · Updated 3 weeks ago
- ☆49 · Updated last year
- Official implementation of Rewarded Soups ☆57 · Updated last year
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… ☆16 · Updated 5 months ago
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆17 · Updated last year
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆36 · Updated last year
- [ACL'24, Outstanding Paper] Emulated Disalignment: Safety Alignment for Large Language Models May Backfire! ☆36 · Updated 9 months ago
- [NeurIPS 2023 Spotlight] Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training ☆34 · Updated 3 weeks ago
- ☆23 · Updated 7 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆64 · Updated 7 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆120 · Updated 7 months ago
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆67 · Updated 4 months ago
- Code for the paper "Spectral Editing of Activations for Large Language Model Alignments" ☆22 · Updated 4 months ago
- Official code for "Decoding-Time Language Model Alignment with Multiple Objectives". ☆22 · Updated 6 months ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆24 · Updated 11 months ago
- ☆66 · Updated 3 years ago
- Code for "Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective" ☆20 · Updated last year
- Code for the NeurIPS 2024 paper "Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs" ☆33 · Updated 2 months ago
- Repository for "Model Merging by Uncertainty-Based Gradient Matching" (ICLR 2024) ☆27 · Updated 11 months ago
- [ICLR 2025] Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization ☆25 · Updated 3 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆76 · Updated last month
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 3 weeks ago
- ☆30 · Updated 9 months ago
- ☆47 · Updated last year
- Forcing Diffuse Distributions out of Language Models ☆15 · Updated 7 months ago
- Test-time training on nearest neighbors for large language models ☆40 · Updated last year