tml-epfl / icl-alignment
Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025]
☆31 · Updated 10 months ago
Alternatives and similar repositories for icl-alignment
Users interested in icl-alignment are comparing it to the repositories listed below.
- The repository contains code for Adaptive Data Optimization ☆28 · Updated 11 months ago
- Restore safety in fine-tuned language models through task arithmetic ☆29 · Updated last year
- ☆15 · Updated last year
- ☆19 · Updated 2 years ago
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- ☆20 · Updated last month
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆19 · Updated last year
- ☆51 · Updated last year
- ☆16 · Updated last year
- ☆51 · Updated 2 years ago
- ACL24 ☆10 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 7 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- [EMNLP 2025 Main] ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆38 · Updated 3 months ago
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" ☆17 · Updated 8 months ago
- Data and code for the preprint "In-Context Learning with Long-Context Models: An In-Depth Exploration" ☆40 · Updated last year
- Exploration of automated dataset selection approaches at large scales. ☆50 · Updated 9 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated 2 months ago
- ☆17 · Updated last year
- Codebase for Inference-Time Policy Adapters ☆24 · Updated 2 years ago
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) ☆30 · Updated 2 months ago
- ☆21 · Updated 3 months ago
- Data Valuation on In-Context Examples (ACL23) ☆24 · Updated 10 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆93 · Updated last year
- Official code repo for paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆24 · Updated 7 months ago
- Code for Paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆48 · Updated 6 months ago
- ☆41 · Updated 2 years ago
- ☆32 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆61 · Updated last year
- ☆29 · Updated last year