pphuc25 / distil-cd
Distillation Contrastive Decoding: Improving LLMs Reasoning with Contrastive Decoding and Distillation
☆35 · Updated last year
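For context on the technique the repo is named after: contrastive decoding scores each candidate token by the gap between a strong "expert" model and a weaker "amateur" model, restricted to tokens the expert itself finds plausible. The sketch below is an illustrative, stdlib-only toy of that common formulation — it is not the distil-cd implementation, and the `alpha` plausibility threshold is the usual convention rather than anything taken from this repo.

```python
import math

def contrastive_decode_step(expert_logits, amateur_logits, alpha=0.1):
    """Pick the next token by contrastive decoding (illustrative sketch).

    expert_logits / amateur_logits: per-token logits from the two models.
    alpha: plausibility cutoff relative to the expert's top probability.
    """
    def softmax(logits):
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    expert_p = softmax(expert_logits)
    amateur_p = softmax(amateur_logits)

    # Plausibility constraint: only consider tokens the expert rates at least
    # alpha * (its top probability). Without it, tokens that are merely very
    # unlikely under the amateur would win on the log-ratio alone.
    threshold = alpha * max(expert_p)

    best, best_score = None, -math.inf
    for i, (pe, pa) in enumerate(zip(expert_p, amateur_p)):
        if pe >= threshold:
            score = math.log(pe) - math.log(pa)  # expert-amateur log-ratio
            if score > best_score:
                best, best_score = i, score
    return best
```

For example, a token the expert prefers but the amateur does not gets the highest contrastive score, even if another token has higher raw expert probability under the amateur too.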
Alternatives and similar repositories for distil-cd
Users interested in distil-cd are comparing it to the libraries listed below.
- ☆269 · Updated last year
- VNHSGE: Vietnamese High School Graduation Examination Dataset for Large Language Models ☆27 · Updated 2 years ago
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆225 · Updated 4 months ago
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge. ☆81 · Updated last year
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆174 · Updated 4 months ago
- ☆118 · Updated 4 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆428 · Updated last year
- [NeurIPS 2024 Main Track] Code for the paper titled "Instruction Tuning With Loss Over Instructions" ☆38 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆114 · Updated last year
- LibMoE: A Library for Comprehensive Benchmarking of Mixture of Experts in Large Language Models ☆40 · Updated 2 months ago
- [ACL 2024 Demo] SeaLLMs - Large Language Models for Southeast Asia ☆170 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆147 · Updated 10 months ago
- Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback ☆97 · Updated last year
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆90 · Updated last year
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆167 · Updated last year
- [ACL 2024] LangBridge: Multilingual Reasoning Without Multilingual Supervision ☆93 · Updated 9 months ago
- Code for "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free" ☆76 · Updated 9 months ago
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆105 · Updated 5 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆360 · Updated 11 months ago
- Repo for the EMNLP'24 paper "Dual-Space Knowledge Distillation for Large Language Models". A general white-box KD framework for both same… ☆56 · Updated 9 months ago
- DSIR large-scale data selection framework for language model training ☆258 · Updated last year
- Prune transformer layers ☆69 · Updated last year
- This is an open-source repository for constructing and researching fusion-style deep learning methods combined with pretrained vision mod… ☆14 · Updated 7 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆166 · Updated last month
- Code for the ACL 2024 paper "Soft Self-Consistency Improves Language Model Agents" ☆23 · Updated 11 months ago
- [EMNLP 2024 (Oral)] Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA ☆139 · Updated 9 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆171 · Updated last month
- X-LoRA: Mixture of LoRA Experts ☆232 · Updated last year
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆112 · Updated this week
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆245 · Updated last year