imagination-research / EEP
Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs
☆18 · Updated 8 months ago
Alternatives and similar repositories for EEP
Users who are interested in EEP are comparing it to the libraries listed below
- Official Pytorch Implementation of Our Paper Accepted at ICLR 2024 -- Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆49 · Updated last year
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization ☆37 · Updated 11 months ago
- Pytorch implementation of our paper accepted by ICML 2024 -- CaM: Cache Merging for Memory-efficient LLMs Inference ☆42 · Updated last year
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆76 · Updated 10 months ago
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆15 · Updated 6 months ago
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆64 · Updated 5 months ago
- LLM Inference with Microscaling Format ☆27 · Updated 9 months ago
- This repo contains the code for studying the interplay between quantization and sparsity methods ☆22 · Updated 6 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Updated last year
- ☆59 · Updated last year
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆44 · Updated last year
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆55 · Updated 5 months ago
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models".