aitsc / GLMKD
Are Intermediate Layers and Labels Really Necessary? A General Language Model Distillation Method; GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model
☆32 · Updated last year
Alternatives and similar repositories for GLMKD:
Users who are interested in GLMKD are comparing it to the libraries listed below.
- ☆95 · Updated 4 months ago
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆106 · Updated this week
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆157 · Updated 8 months ago
- Implementation of ICML 23 Paper: Specializing Smaller Language Models towards Multi-Step Reasoning. ☆129 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 · Updated 11 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated last month
- A prototype repo for hybrid training of pipeline parallel and distributed data parallel with comments on core code snippets. Feel free to… ☆55 · Updated last year
- Code implementation of synthetic continued pretraining ☆88 · Updated last month
- An Experiment on Dynamic NTK Scaling RoPE ☆62 · Updated last year
- ☆30 · Updated 5 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆153 · Updated 8 months ago
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆68 · Updated 6 months ago
- Llama-3-SynE: A Significantly Enhanced Version of Llama-3 with Advanced Scientific Reasoning and Chinese Language Capabilities | Continual pre-training improves … ☆33 · Updated 2 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆142 · Updated 5 months ago
- ☆47 · Updated 10 months ago
- The code of paper "Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation" published at NeurIPS 202… ☆44 · Updated 2 years ago
- Lightweight tool to identify Data Contamination in LLMs evaluation ☆46 · Updated 11 months ago
- Code for M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models ☆22 · Updated 6 months ago
- Fantastic Data Engineering for Large Language Models ☆71 · Updated last month
- Implementations of online merging optimizers proposed by Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment ☆73 · Updated 8 months ago
- Towards Systematic Measurement for Long Text Quality ☆31 · Updated 5 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆68 · Updated last month
- ☆105 · Updated last year
- Code for ACL 2023 paper: Pre-Training to Learn in Context ☆108 · Updated 6 months ago
- We aim to provide the best references to search, select, and synthesize high-quality and large-quantity data for post-training your LLMs. ☆49 · Updated 4 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆135 · Updated 3 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆35 · Updated 10 months ago
- Code for our EMNLP 2023 paper: "Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks" ☆24 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆154 · Updated 2 months ago
- Retrieval as Attention ☆83 · Updated 2 years ago