tml-epfl / icl-alignment
Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025]
☆30 · Updated 5 months ago
Alternatives and similar repositories for icl-alignment
Users interested in icl-alignment are comparing it to the repositories listed below.
- The repository contains code for Adaptive Data Optimization ☆25 · Updated 7 months ago
- Exploration of automated dataset selection approaches at large scales. ☆47 · Updated 4 months ago
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- ☆45 · Updated last year
- ☆14 · Updated last year
- ☆27 · Updated 5 months ago
- Codebase for Instruction Following without Instruction Tuning ☆35 · Updated 9 months ago
- Official code repo for paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆23 · Updated 2 months ago
- ☆20 · Updated last year
- ☆19 · Updated last year
- ☆44 · Updated 10 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 3 months ago
- ☆17 · Updated last year
- ☆14 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆43 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- Code release for Dataless Knowledge Fusion by Merging Weights of Language Models (https://openreview.net/forum?id=FCnohuR6AnM) ☆89 · Updated last year
- ☆29 · Updated last year
- ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆36 · Updated 5 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆56 · Updated last year
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆33 · Updated 3 months ago
- An official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025) ☆27 · Updated 4 months ago
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 5 months ago
- Restore safety in fine-tuned language models through task arithmetic ☆28 · Updated last year
- ☆19 · Updated 10 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆90 · Updated 7 months ago
- ☆51 · Updated 3 months ago
- Official repo of the paper "Eliminating Position Bias of Language Models: A Mechanistic Approach" ☆14 · Updated last month
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective"☆32Updated last year