allenai / easy-to-hard-generalization
Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data for Hard Tasks"
☆48 · Updated last year
Alternatives and similar repositories for easy-to-hard-generalization
Users interested in easy-to-hard-generalization are comparing it to the libraries listed below.
- ☆64 · Updated last year
- Codebase for "Instruction Following without Instruction Tuning" ☆35 · Updated 9 months ago
- Exploration of automated dataset selection approaches at large scales ☆47 · Updated 4 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆56 · Updated last year
- CodeUltraFeedback: aligning large language models to coding preferences ☆71 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- Code for Adaptive Data Optimization ☆25 · Updated 7 months ago
- Official repository for "SkyLadder: Better and Faster Pretraining via Context Window Scheduling" ☆33 · Updated 3 months ago
- [NeurIPS 2024] OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI ☆102 · Updated 4 months ago
- ☆54 · Updated 2 weeks ago
- Official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data" ☆17 · Updated last year
- Astraios: Parameter-Efficient Instruction Tuning Code Language Models ☆58 · Updated last year
- Reference implementation for "Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model" ☆43 · Updated last year
- ☆33 · Updated 6 months ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- Long Context Extension and Generalization in LLMs ☆57 · Updated 9 months ago
- [ICML 2025] Predictive Data Selection: The Data That Predicts Is the Data That Teaches ☆51 · Updated 4 months ago
- Conic10K: a large-scale dataset for closed-vocabulary math problem understanding (EMNLP 2023 Findings) ☆26 · Updated last year
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) ☆86 · Updated 9 months ago
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆82 · Updated last month
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated 2 years ago
- Repository for Skill Set Optimization ☆14 · Updated 11 months ago
- ☆66 · Updated last year
- A repository for research on medium-sized language models ☆77 · Updated last year
- Replicating o1 inference-time scaling laws ☆89 · Updated 7 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated 10 months ago
- ☆82 · Updated 10 months ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆25 · Updated 3 months ago
- Official implementation of Self-Exploring Language Models (SELM) ☆64 · Updated last year