duterscmy / CD-MoE
Official PyTorch implementation of CD-MoE
☆12 · Updated 8 months ago
Alternatives and similar repositories for CD-MoE
Users interested in CD-MoE are comparing it to the libraries listed below.
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆30 · Updated last year
- Official implementation of the paper "A deeper look at depth pruning of LLMs" ☆15 · Updated last year
- KVTuner: Sensitivity-Aware Layer-wise Mixed Precision KV Cache Quantization for Efficient and Nearly Lossless LLM Inference ☆24 · Updated 6 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Updated last year
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆51 · Updated 3 months ago
- ☆14 · Updated 4 years ago
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆16 · Updated 9 months ago
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆98 · Updated last year
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆44 · Updated last year
- ☆27 · Updated 8 months ago
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers ☆49 · Updated 2 years ago
- BESA is a differentiable weight pruning technique for large language models. ☆17 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers ☆48 · Updated 3 years ago
- ☆16 · Updated 2 years ago
- Code for "RSQ: Learning from Important Tokens Leads to Better Quantized LLMs" ☆20 · Updated 6 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆29 · Updated 3 months ago
- ACL 2023 ☆39 · Updated 2 years ago
- [ICML2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆24 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training (ICLR 2023) ☆31 · Updated 2 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- Kinetics: Rethinking Test-Time Scaling Laws ☆84 · Updated 5 months ago
- Is gradient information useful for pruning of LLMs? ☆47 · Updated 3 months ago
- ☆30 · Updated last year
- ☆62 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆21 · Updated last month