siyan-zhao / ICL_decision_boundary
Official code for the paper "Probing the Decision Boundaries of In-context Learning in Large Language Models" [NeurIPS 2024]. https://arxiv.org/abs/2406.11233
☆19 · Updated last week
Alternatives and similar repositories for ICL_decision_boundary
Users interested in ICL_decision_boundary are comparing it to the repositories listed below.
- ☆43 · Updated last year
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆72 · Updated last month
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" ☆103 · Updated 2 years ago
- ☆51 · Updated last year
- Official implementation of Rewarded Soups ☆58 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆74 · Updated 4 months ago
- ☆96 · Updated last year
- Test-time training on nearest neighbors for large language models ☆45 · Updated last year
- ☆71 · Updated 3 years ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated 10 months ago
- A Sober Look at Language Model Reasoning ☆81 · Updated last month
- Official code for "SEAL: Steerable Reasoning Calibration of Large Language Models for Free" ☆39 · Updated 4 months ago
- Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples ☆38 · Updated 3 weeks ago
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆35 · Updated last year
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆47 · Updated 9 months ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆35 · Updated 2 weeks ago
- A repo for RLHF training and best-of-n (BoN) sampling over LLMs, with support for reward model ensembles ☆45 · Updated 6 months ago
- Official repository for the paper "Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode…" ☆17 · Updated 8 months ago
- [ACL'24, Outstanding Paper] Emulated Disalignment: Safety Alignment for Large Language Models May Backfire! ☆38 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆173 · Updated last year
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆18 · Updated last year
- Code for "Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective" ☆20 · Updated 2 years ago
- Code for the NeurIPS 2024 paper "Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs" ☆38 · Updated 5 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆85 · Updated 11 months ago
- Code for the paper "Spectral Editing of Activations for Large Language Model Alignments" ☆24 · Updated 7 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆175 · Updated 3 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆73 · Updated 10 months ago
- ☆50 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆81 · Updated 4 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆114 · Updated 4 months ago