ZunhaiSu / Super-Experts-Profilling
Unveiling Super Experts in Mixture-of-Experts Large Language Models
☆30 · Updated last month
Alternatives and similar repositories for Super-Experts-Profilling
Users interested in Super-Experts-Profilling are comparing it to the libraries listed below.
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- Due to the huge vocabulary size (151,936) of Qwen models, the Embedding and LM Head weights are excessively heavy. Therefore, this projec… ☆29 · Updated last year
- qwen-nsa ☆83 · Updated last month
- Efficient Mixture of Experts for LLM Paper List ☆144 · Updated last month
- [NeurIPS 2024] Fast Best-of-N Decoding via Speculative Rejection ☆52 · Updated last year
- siiRL: Shanghai Innovation Institute RL Framework for Advanced LLMs and Multi-Agent Systems ☆226 · Updated this week
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression ☆69 · Updated 7 months ago
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆36 · Updated last year
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆33 · Updated 5 months ago
- [NeurIPS 2024] The official implementation of "Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exitin… ☆63 · Updated last year
- Repository of the LV-Eval Benchmark ☆71 · Updated last year
- Multi-Candidate Speculative Decoding ☆36 · Updated last year
- ☆29 · Updated 5 months ago
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ☆68 · Updated 11 months ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆130 · Updated 3 weeks ago
- [ICML'24] Pruner-Zero: Evolving Symbolic Pruning Metric from Scratch for LLMs ☆96 · Updated 11 months ago
- [ICLR'24 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆99 · Updated 5 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆249 · Updated 3 months ago
- Source code for the paper "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" ☆41 · Updated last year
- ☆62 · Updated 2 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆145 · Updated 7 months ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆63 · Updated last year
- ☆120 · Updated 5 months ago
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆66 · Updated last year
- Token-level visualization tools for large language models ☆90 · Updated 10 months ago
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆109 · Updated last year
- Code associated with the paper "Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding" ☆208 · Updated 9 months ago
- Open-Pandora: On-the-fly Control Video Generation ☆35 · Updated 11 months ago
- ☆23 · Updated 7 months ago
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆125 · Updated 10 months ago