aitsc / GLMKD
Are Intermediate Layers and Labels Really Necessary? A General Language Model Distillation Method ; GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model
☆32 · Updated 2 years ago
Alternatives and similar repositories for GLMKD
Users who are interested in GLMKD are comparing it to the repositories listed below.
- ☆108 · Updated 4 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆166 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆192 · Updated last year
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated last year
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆177 · Updated 9 months ago
- ☆105 · Updated 2 years ago
- ☆54 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆181 · Updated 4 months ago
- Implementations of the online merging optimizers proposed in "Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment" ☆79 · Updated last year
- ☆34 · Updated last year
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆112 · Updated 3 years ago
- A prototype repo for hybrid training of pipeline parallel and distributed data parallel with comments on core code snippets. Feel free to… ☆57 · Updated 2 years ago
- Code for ICML 25 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆47 · Updated 4 months ago
- Implementation of ICML 23 Paper: Specializing Smaller Language Models towards Multi-Step Reasoning. ☆131 · Updated 2 years ago
- Towards Systematic Measurement for Long Text Quality ☆37 · Updated last year
- Repo for the EMNLP'24 Paper "Dual-Space Knowledge Distillation for Large Language Models". A general white-box KD framework for both same… ☆61 · Updated 2 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts (EMNLP 2023)" ☆40 · Updated last year
- DSIR large-scale data selection framework for language model training ☆265 · Updated last year
- ☆141 · Updated last year
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆76 · Updated last year
- ☆69 · Updated last year
- Code for our EMNLP-2023 paper: "Active Instruction Tuning: Improving Cross-Task Generalization by Training on Prompt Sensitive Tasks" ☆25 · Updated 2 years ago
- Code for M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models ☆23 · Updated last year
- This repository contains the joint use of the CPO and SimPO methods for better reference-free preference learning.