princeton-nlp / QuRating
[ICML 2024] Selecting High-Quality Data for Training Language Models
☆185 · Updated last year
Alternatives and similar repositories for QuRating
Users that are interested in QuRating are comparing it to the libraries listed below
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) · ☆163 · Updated 6 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings · ☆157 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning · ☆169 · Updated last month
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" · ☆163 · Updated last year
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… · ☆115 · Updated last year
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" · ☆170 · Updated 3 months ago
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues · ☆105 · Updated last year
- ☆104 · Updated last month
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" · ☆80 · Updated 7 months ago
- Code implementation of synthetic continued pretraining · ☆123 · Updated 7 months ago
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" · ☆132 · Updated 2 years ago
- [AAAI 2025 Oral] Evaluating Mathematical Reasoning Beyond Accuracy · ☆69 · Updated 8 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? · ☆83 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" · ☆219 · Updated 5 months ago
- [ACL 2025] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs · ☆64 · Updated 9 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following · ☆129 · Updated last year
- Model merging is a highly efficient approach for long-to-short reasoning · ☆80 · Updated 2 months ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" · ☆75 · Updated last year
- ☆68 · Updated last year
- ☆48 · Updated 3 months ago
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models · ☆56 · Updated last year
- Do Large Language Models Know What They Don't Know? · ☆99 · Updated 9 months ago
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models · ☆110 · Updated 2 months ago
- Official repository for "MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models" [NeurIPS 2024] · ☆74 · Updated 9 months ago
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models · ☆185 · Updated 10 months ago
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… · ☆51 · Updated last year
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" · ☆267 · Updated 11 months ago
- ☆53 · Updated last year
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) · ☆91 · Updated 6 months ago
- Collection of papers for scalable automated alignment · ☆93 · Updated 10 months ago