princeton-pli / MeCo
Code for preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)"
☆39 · Updated 3 weeks ago
Alternatives and similar repositories for MeCo
Users interested in MeCo are comparing it to the repositories listed below
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆49 · Updated 5 months ago
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 8 months ago
- The official implementation for Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free ☆38 · Updated 2 weeks ago
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆73 · Updated last week
- ☆35 · Updated last year
- Official implementation of the paper "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large L… ☆48 · Updated 11 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆36 · Updated 3 months ago
- [ACL-25] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆63 · Updated 7 months ago
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP’24) ☆22 · Updated 6 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆77 · Updated last year
- Towards Systematic Measurement for Long Text Quality ☆35 · Updated 8 months ago
- A scalable automated alignment method for large language models. Resources for "Aligning Large Language Models via Self-Steering Optimiza… ☆18 · Updated 6 months ago
- ☆100 · Updated 8 months ago
- ☆29 · Updated 5 months ago
- Long Context Extension and Generalization in LLMs ☆56 · Updated 8 months ago
- Multi-GPU-supported k-means clustering for cluster-clip ☆13 · Updated 11 months ago
- Code for Blog Post: Can Better Cold-Start Strategies Improve RL Training for LLMs? ☆16 · Updated 2 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆71 · Updated 6 months ago
- We aim to provide the best references to search, select, and synthesize high-quality and large-quantity data for post-training your LLMs. ☆55 · Updated 7 months ago
- A paper list of multilingual pre-trained models (continually updated). ☆22 · Updated 11 months ago
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆75 · Updated 6 months ago
- The code and data for the paper JiuZhang3.0 ☆45 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆62 · Updated 10 months ago
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales ☆32 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆21 · Updated 9 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆45 · Updated 7 months ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆78 · Updated last year
- Revisiting Mid-training in the Era of RL Scaling ☆42 · Updated last month
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆84 · Updated 8 months ago