duterscmy / CD-MoE
Official PyTorch implementation of CD-MoE
☆11 · Updated 3 months ago
Alternatives and similar repositories for CD-MoE
Users interested in CD-MoE are comparing it to the libraries listed below.
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆15 · Updated 5 months ago
- KVTuner: Sensitivity-Aware Layer-wise Mixed Precision KV Cache Quantization for Efficient and Nearly Lossless LLM Inference ☆15 · Updated 2 months ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod…" ☆27 · Updated last year
- Official implementation of the paper: "A deeper look at depth pruning of LLMs" ☆15 · Updated 11 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆42 · Updated last year
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆31 · Updated last year
- Code Repository for the NeurIPS 2024 Paper "Toward Efficient Inference for Mixture of Experts" ☆19 · Updated 8 months ago
- ☆22 · Updated 3 months ago
- [ICML 2024] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆89 · Updated 7 months ago
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆49 · Updated last year
- DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training (ICLR 2023) ☆31 · Updated 2 years ago
- BESA is a differentiable weight pruning technique for large language models. ☆17 · Updated last year
- Repository for CPU Kernel Generation for LLM Inference ☆26 · Updated 2 years ago
- Code for "RSQ: Learning from Important Tokens Leads to Better Quantized LLMs" ☆18 · Updated last month
- Lottery Ticket Adaptation ☆39 · Updated 7 months ago
- ACL 2023 ☆39 · Updated 2 years ago
- ☆28 · Updated 11 months ago
- ☆21 · Updated 2 years ago
- AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers ☆47 · Updated 2 years ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆65 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" ☆23 · Updated 8 months ago
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆23 · Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers ☆48 · Updated 2 years ago
- Flexible simulator for mixed precision and format simulation of LLMs and vision transformers ☆51 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- ☆16 · Updated last year
- ☆58 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆52 · Updated 2 years ago
- Source code for the IJCAI 2022 long paper "Parameter-Efficient Sparsity for Large Language Models Fine-Tuning" ☆14 · Updated 3 years ago