jongwooko / distillm-2
Official PyTorch implementation of DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs (ICML 2025 Oral)
☆39 · Updated 2 months ago
Alternatives and similar repositories for distillm-2
Users interested in distillm-2 are comparing it to the libraries listed below.
- ☆58 · Updated 9 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆69 · Updated 6 months ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆83 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆179 · Updated last year
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". ☆104 · Updated 2 years ago
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆35 · Updated last year
- A curated list of Model Merging methods. ☆92 · Updated last year
- ☆74 · Updated 3 years ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆89 · Updated 10 months ago
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆39 · Updated 11 months ago
- ☆190 · Updated last year
- ☆30 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆124 · Updated 2 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Updated last year
- Compressed LLMs for Efficient Text Generation [ICLR'24 Workshop] ☆88 · Updated last year
- [ICLR 2023] "Learning to Grow Pretrained Models for Efficient Transformer Training" by Peihao Wang, Rameswar Panda, Lucas Torroba Hennige… ☆92 · Updated last year
- Official Pytorch Implementation of Our Paper Accepted at ICLR 2024 -- Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆49 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆55 · Updated 2 years ago
- Less is More: Task-aware Layer-wise Distillation for Language Model Compression (ICML 2023) ☆38 · Updated 2 years ago
- ☆20 · Updated 9 months ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆131 · Updated 5 months ago
- Code for paper "Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning" ☆83 · Updated last year
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆28 · Updated last year
- ☆17 · Updated 7 months ago
- This pytorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆46 · Updated 2 years ago
- Awesome-Low-Rank-Adaptation ☆116 · Updated 11 months ago
- ☆25 · Updated last month
- Code for the paper: Why Transformers Need Adam: A Hessian Perspective ☆62 · Updated 6 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆49 · Updated 10 months ago