hkust-nlp / PreSelect
[ICML 2025] Predictive Data Selection: The Data That Predicts Is the Data That Teaches
☆47 · Updated 3 months ago
Alternatives and similar repositories for PreSelect
Users interested in PreSelect are comparing it to the repositories listed below.
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆37 · Updated 3 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆105 · Updated 5 months ago
- Revisiting Mid-training in the Era of RL Scaling ☆56 · Updated last month
- ☆68 · Updated 3 months ago
- Repo for "Z1: Efficient Test-time Scaling with Code" ☆60 · Updated 2 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆98 · Updated last month
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆32 · Updated 3 months ago
- ☆65 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 8 months ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆85 · Updated 8 months ago
- Exploration of automated dataset selection approaches at large scales ☆44 · Updated 3 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains ☆141 · Updated last week
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆57 · Updated 3 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆157 · Updated 2 weeks ago
- Long Context Extension and Generalization in LLMs ☆57 · Updated 9 months ago
- Code for the preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆39 · Updated last month
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆88 · Updated last month
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆189 · Updated 3 months ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆25 · Updated 3 months ago
- ☆59 · Updated 9 months ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆73 · Updated last month
- The official implementation of "Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free" ☆44 · Updated last month
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆138 · Updated 9 months ago
- Replicating O1 inference-time scaling laws ☆87 · Updated 6 months ago
- ☆96 · Updated 8 months ago
- [ICLR 2025] Data and code for the paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆76 · Updated 6 months ago
- ☆79 · Updated 10 months ago
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆54 · Updated last month
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆72 · Updated 2 months ago