princeton-pli / MeCo
Code for preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)"
☆39 · Updated last month
Alternatives and similar repositories for MeCo
Users interested in MeCo are comparing it to the repositories listed below
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 8 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆50 · Updated 2 weeks ago
- The paper list of multilingual pre-trained models (continually updated). ☆22 · Updated last year
- ☆35 · Updated last year
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆37 · Updated 3 months ago
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆73 · Updated last month
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆57 · Updated 3 months ago
- Towards Systematic Measurement for Long Text Quality ☆35 · Updated 9 months ago
- Code for Blog Post: Can Better Cold-Start Strategies Improve RL Training for LLMs? ☆17 · Updated 3 months ago
- The official implementation for Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free ☆44 · Updated last month
- Suri: Multi-constraint instruction following for long-form text generation (EMNLP'24) ☆23 · Updated 7 months ago
- ☆14 · Updated last year
- MUFFIN: Curating Multi-Faceted Instructions for Improving Instruction-Following ☆16 · Updated 7 months ago
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆22 · Updated 10 months ago
- Long Context Extension and Generalization in LLMs ☆57 · Updated 9 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- ☆101 · Updated 8 months ago
- Revisiting Mid-training in the Era of RL Scaling ☆56 · Updated last month
- Contextual Position Encoding but with some custom CUDA kernels https://arxiv.org/abs/2405.18719 ☆22 · Updated last year
- ☆30 · Updated 5 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆52 · Updated 4 months ago
- The code and data for the paper JiuZhang3.0 ☆46 · Updated last year
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆85 · Updated 8 months ago
- ☆18 · Updated 6 months ago
- [ICML 2025] Predictive Data Selection: The Data That Predicts Is the Data That Teaches ☆47 · Updated 3 months ago
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆45 · Updated 7 months ago
- This repository combines the CPO and SimPO methods for better reference-free preference learning. ☆53 · Updated 10 months ago
- [NeurIPS 2023] Repetition In Repetition Out: Towards Understanding Neural Text Degeneration from the Data Perspective ☆33 · Updated last year
- ☆64 · Updated last year
- DuoGuard: A Two-Player RL-Driven Framework for Multilingual LLM Guardrails ☆24 · Updated 3 months ago