aitsc / GLMKD
Are Intermediate Layers and Labels Really Necessary? A General Language Model Distillation Method; GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model
☆31 · Updated last year
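The repository accompanies two papers on distilling large pre-trained language models into smaller students, and both build on the standard teacher-student objective that mixes temperature-scaled soft labels with hard-label cross-entropy. As a baseline reference only, here is a minimal PyTorch sketch of vanilla logit distillation (the Hinton-style loss), not GLMKD's actual training code; the function name, temperature `T`, and `alpha` weighting are illustrative assumptions.

```python
# Minimal sketch of vanilla logit distillation (Hinton-style KD).
# NOT GLMKD's training loop; T and alpha are illustrative assumptions.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Mix soft-label KL against the teacher with hard-label cross-entropy.

    student_logits, teacher_logits: (batch, vocab) tensors
    labels: (batch,) gold class/token ids
    """
    # Soft targets: KL(teacher || student) at temperature T, scaled by T^2
    # so the soft-label gradients keep a magnitude comparable to the
    # hard-label term as T changes.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy on the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random tensors standing in for real model outputs.
if __name__ == "__main__":
    B, V = 4, 50_000
    s = torch.randn(B, V, requires_grad=True)
    t = torch.randn(B, V)
    y = torch.randint(0, V, (B,))
    loss = kd_loss(s, t, y)
    loss.backward()
    print(f"kd loss: {loss.item():.4f}")
```

The two papers above examine which of these signals, along with intermediate-layer matching, are actually necessary for distillation; this sketch only shows the common logit-level starting point they compare against.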
Alternatives and similar repositories for GLMKD:
Users interested in GLMKD are comparing it to the repositories listed below
- ☆93 · Updated 3 months ago
- 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training ☆97 · Updated 3 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆155 · Updated 6 months ago
- ☆47 · Updated 9 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆75 · Updated 10 months ago
- Towards Systematic Measurement for Long Text Quality ☆31 · Updated 4 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆154 · Updated 7 months ago
- An Experiment on Dynamic NTK Scaling RoPE ☆62 · Updated last year
- [ACL 2024] MT-Bench-101: A Fine-Grained Benchmark for Evaluating Large Language Models in Multi-Turn Dialogues ☆61 · Updated 5 months ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆57 · Updated 2 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆34 · Updated 9 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated 3 weeks ago
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆128 · Updated last year
- ☆60 · Updated 2 years ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆137 · Updated 4 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆55 · Updated 6 months ago
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022) ☆97 · Updated 2 years ago
- Implementations of the online merging optimizers proposed in "Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment" ☆70 · Updated 7 months ago
- ☆30 · Updated 4 months ago
- ☆118 · Updated 5 months ago
- ☆28 · Updated last year
- Repo for the EMNLP'24 paper "Dual-Space Knowledge Distillation for Large Language Models" ☆40 · Updated 2 months ago
- Code for M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models ☆22 · Updated 5 months ago
- Retrieval as Attention ☆83 · Updated 2 years ago
- ☆39 · Updated last year
- ☆18 · Updated 2 years ago
- Code implementation of synthetic continued pretraining ☆79 · Updated last week
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆171 · Updated 3 months ago
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆38 · Updated last year
- Code for the ACL 2023 paper "Lifting the Curse of Capacity Gap in Distilling Language Models" ☆28 · Updated last year