ZunhaiSu / Super-Experts-Profilling
Unveiling Super Experts in Mixture-of-Experts Large Language Models
☆22 · Updated last month
Alternatives and similar repositories for Super-Experts-Profilling
Users interested in Super-Experts-Profilling are comparing it to the libraries listed below.
- Due to the huge vocabulary size (151,936) of Qwen models, the Embedding and LM Head weights are excessively heavy. Therefore, this projec… (see the size sketch after this list) ☆26 · Updated last year
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆136 · Updated last year
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression ☆65 · Updated 5 months ago
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆32 · Updated 3 months ago
- Efficient Mixture of Experts for LLM Paper List ☆118 · Updated this week
- Unofficial implementations of block/layer-wise pruning methods for LLMs ☆72 · Updated last year
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆35 · Updated last year
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆92 · Updated 9 months ago
- qwen-nsa ☆74 · Updated 4 months ago
- ☆117 · Updated 2 months ago
- [ICML 2025 Oral] Mixture of Lookup Experts ☆51 · Updated 3 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆123 · Updated 7 months ago
- Source code for the paper "LongGenBench: Long-context Generation Benchmark" ☆23 · Updated 10 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Updated last year
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆51 · Updated 10 months ago
- Open-Pandora: On-the-fly Control Video Generation ☆34 · Updated 9 months ago
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆64 · Updated 5 months ago
- ☆20 · Updated 4 months ago
- ☆69 · Updated 2 months ago
- Inference Code for Paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆62 · Updated last year
- Implementation for the paper: CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference ☆24 · Updated 5 months ago
- ☆38 · Updated last month
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆93 · Updated 2 months ago
- Patches for Hugging Face Transformers to save memory ☆27 · Updated 3 months ago
- ☆14 · Updated last year
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18GB GPU Memory ☆180 · Updated last month
- ☆39 · Updated 3 months ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆59 · Updated last year
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆189 · Updated 2 months ago
- "What, How, Where, and How Well? A Survey on Test-Time Scaling in Large Language Models" repository ☆62 · Updated this week