samuelarnesen / nyu-debate-modeling
☆21 · Updated 8 months ago
Alternatives and similar repositories for nyu-debate-modeling
Users interested in nyu-debate-modeling are comparing it to the libraries listed below.
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆47 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated 11 months ago
- The repository contains code for Adaptive Data Optimization ☆24 · Updated 5 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆54 · Updated last year
- ☆29 · Updated 10 months ago
- This is the official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆70 · Updated 11 months ago
- Official Repo for InSTA: Towards Internet-Scale Training For Agents ☆42 · Updated this week
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆32 · Updated 2 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- NeurIPS 2024 tutorial on LLM Inference ☆45 · Updated 5 months ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆30 · Updated 4 months ago
- Code and Data Repo for the CoNLL Paper -- Future Lens: Anticipating Subsequent Tokens from a Single Hidden State ☆18 · Updated last year
- ☆48 · Updated 3 weeks ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆26 · Updated last year
- Measuring the situational awareness of language models ☆35 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆75 · Updated 6 months ago
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 8 months ago
- Sparse Autoencoder Training Library ☆52 · Updated last month
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆89 · Updated last week
- ☆45 · Updated last year
- Exploration of automated dataset selection approaches at large scales ☆41 · Updated 3 months ago
- ☆27 · Updated this week
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 9 months ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆29 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- ☆29 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences ☆71 · Updated 11 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆43 · Updated last year