samuelarnesen / nyu-debate-modeling
☆19 · Updated last month
Related projects
Alternatives and complementary repositories for nyu-debate-modeling
- The repository contains code for Adaptive Data Optimization ☆19 · Updated last month
- ☆28 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆44 · Updated 10 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆62 · Updated 5 months ago
- Replicating O1 inference-time scaling laws ☆51 · Updated last month
- Code and Data Repo for the CoNLL Paper -- Future Lens: Anticipating Subsequent Tokens from a Single Hidden State ☆17 · Updated 10 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆61 · Updated last week
- ☆25 · Updated 4 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆41 · Updated 10 months ago
- ☆28 · Updated 5 months ago
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆17 · Updated 3 weeks ago
- ☆53 · Updated 3 weeks ago
- ☆90 · Updated 4 months ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? ☆25 · Updated 5 months ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆25 · Updated 6 months ago
- ☆33 · Updated 6 months ago
- ☆26 · Updated last year
- ☆35 · Updated 3 weeks ago
- Learning to Retrieve by Trying - Source code for Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval ☆24 · Updated 3 weeks ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆48 · Updated 7 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆46 · Updated 2 months ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆37 · Updated 5 months ago
- Lottery Ticket Adaptation ☆37 · Updated this week
- Language models scale reliably with over-training and on downstream tasks ☆94 · Updated 7 months ago
- The official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated 9 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; arXiv preprint arXiv:2403.…) ☆37 · Updated 4 months ago
- ☆46 · Updated 2 weeks ago
- ☆24 · Updated 7 months ago
- ☆55 · Updated last month
- Using FlexAttention to compute attention with different masking patterns ☆40 · Updated 2 months ago