shaochenze / PatchTrain
Code for paper "Patch-Level Training for Large Language Models"
☆67 · Updated 3 months ago
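For context, a minimal sketch of the patch-level training idea the paper describes (this is not code from the repository; the helper name `to_patches` and the default `patch_size` are illustrative assumptions): consecutive token embeddings are aggregated into patch embeddings, so most of the training compute is spent on a shorter patch-level sequence before a final token-level stage.

```python
import torch

def to_patches(token_embeds: torch.Tensor, patch_size: int = 4) -> torch.Tensor:
    """Average every `patch_size` consecutive token embeddings into one patch embedding.

    token_embeds: (batch, seq_len, hidden); seq_len is assumed divisible by patch_size.
    Returns a tensor of shape (batch, seq_len // patch_size, hidden).
    """
    b, t, h = token_embeds.shape
    # Group tokens into non-overlapping windows of `patch_size` and average each window.
    return token_embeds.view(b, t // patch_size, patch_size, h).mean(dim=2)

# Toy example: 16 token embeddings compress to 4 patch embeddings.
x = torch.randn(2, 16, 768)
patches = to_patches(x, patch_size=4)
print(patches.shape)  # torch.Size([2, 4, 768])
```

See the repository and paper for the actual patch construction and the token-level training stage that follows.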
Related projects
Alternatives and complementary repositories for PatchTrain
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆72 · Updated 7 months ago
- We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆43 · Updated 2 weeks ago
- 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training ☆87 · Updated last month
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆36 · Updated 8 months ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆111 · Updated last week
- Official repository for paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆67 · Updated 5 months ago
- Repo for the EMNLP'24 Paper "Dual-Space Knowledge Distillation for Large Language Models". ☆36 · Updated this week
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts (EMNLP 2023)" ☆34 · Updated 7 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems. ☆45 · Updated 4 months ago
- ☆89 · Updated last month
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆46 · Updated 3 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆141 · Updated 4 months ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models ☆47 · Updated last month
- PyTorch implementation of StableMask (ICML'24) ☆11 · Updated 4 months ago
- 🍼 Official implementation of Dynamic Data Mixing Maximizes Instruction Tuning for Mixture-of-Experts ☆34 · Updated last month
- Code for Suri: Multi-constraint instruction following for long-form text generation ☆17 · Updated 2 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆30 · Updated last month
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆67 · Updated last month
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆56 · Updated 8 months ago
- Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆59 · Updated this week
- Intuitive Fine-Tuning: Towards Simplifying Alignment into a Single Process ☆22 · Updated 3 months ago
- The official repository of the Omni-MATH benchmark. ☆45 · Updated last week
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆30 · Updated 3 weeks ago
- Long Context Extension and Generalization in LLMs ☆39 · Updated last month
- Fantastic Data Engineering for Large Language Models ☆49 · Updated 3 months ago
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆25 · Updated 3 months ago
- "Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding" Zhenyu Zhang, Runjin Chen, Shiw… ☆22 · Updated 6 months ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆44 · Updated last year
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales ☆30 · Updated last year
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆30 · Updated 3 months ago