Tebmer / Awesome-Knowledge-Distillation-of-LLMs
This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicitation and Distillation Algorithms, and explore the Skill & Vertical Distillation of LLMs.
⭐ 856 · Updated this week
Alternatives and similar repositories for Awesome-Knowledge-Distillation-of-LLMs:
Users who are interested in Awesome-Knowledge-Distillation-of-LLMs are comparing it to the libraries listed below.
- 📰 Must-read papers and blogs on LLM based Long Context Modeling 🔥 ⭐ 1,286 · Updated this week
- [TMLR 2024] Efficient Large Language Models: A Survey ⭐ 1,105 · Updated this week
- A curated list for Efficient Large Language Models ⭐ 1,478 · Updated 2 weeks ago
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large … ⭐ 989 · Updated 3 months ago
- ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search (NeurIPS 2024) ⭐ 580 · Updated last month
- A collection of AWESOME things about mixture-of-experts ⭐ 1,057 · Updated 2 months ago
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ⭐ 962 · Updated 4 months ago
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ⭐ 926 · Updated 2 months ago
- Large Reasoning Models ⭐ 800 · Updated 3 months ago
- Must-read Papers on Knowledge Editing for Large Language Models. ⭐ 1,019 · Updated 2 weeks ago
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. arXiv:2408.07666. ⭐ 326 · Updated this week
- Awesome LLM compression research papers and tools. ⭐ 1,393 · Updated this week
- O1 Replication Journey ⭐ 1,964 · Updated last month
- An Awesome Collection for LLM Survey ⭐ 328 · Updated 5 months ago
- A series of technical reports on Slow Thinking with LLM ⭐ 438 · Updated this week
- ⭐ 483 · Updated 2 months ago
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ⭐ 609 · Updated this week
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ⭐ 412 · Updated 4 months ago
- The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models". ⭐ 260 · Updated last month
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ⭐ 587 · Updated 11 months ago
- Recipes to train reward models for RLHF. ⭐ 1,205 · Updated 3 weeks ago
- Awesome papers in LLM interpretability ⭐ 405 · Updated last month
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ⭐ 535 · Updated 2 months ago
- ✨✨ Latest Papers and Benchmarks in Reasoning with Foundation Models ⭐ 523 · Updated 2 months ago
- LongBench v2 and LongBench (ACL 2024) ⭐ 788 · Updated last month
- ⭐ 412 · Updated last week
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ⭐ 835 · Updated 2 weeks ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ⭐ 808 · Updated this week
- Official repository for the ICLR 2025 paper "Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing". Your efficient an… ⭐ 638 · Updated 2 weeks ago
- Paper List for In-context Learning ⭐ 842 · Updated 4 months ago