astramind-ai / Mixture-of-depths
Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models"
☆172 · Updated last year
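The paper's core mechanism is a learned per-layer router that selects a top-k subset of tokens to receive the block's full attention and MLP compute, while the remaining tokens skip the block through the residual stream. Below is a minimal PyTorch sketch of that routing pattern; the `MoDBlock` name, the `capacity` hyperparameter, and the sigmoid gating are illustrative assumptions, not code from this repository.

```python
import torch
import torch.nn as nn

class MoDBlock(nn.Module):
    """Mixture-of-Depths-style routing: only top-k tokens get compute (sketch)."""

    def __init__(self, block: nn.Module, d_model: int, capacity: float = 0.125):
        super().__init__()
        self.block = block                    # computes the residual update f(x), not x + f(x)
        self.router = nn.Linear(d_model, 1)   # one scalar routing score per token
        self.capacity = capacity              # fraction of tokens routed through the block

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        b, s, d = x.shape
        k = max(1, int(s * self.capacity))
        scores = self.router(x).squeeze(-1)            # (batch, seq_len)
        top = scores.topk(k, dim=-1).indices           # indices of the routed tokens
        top = top.sort(dim=-1).values                  # keep sequence order for causal attention
        idx = top.unsqueeze(-1).expand(-1, -1, d)      # (batch, k, d_model)
        routed = x.gather(1, idx)                      # tokens that receive compute
        # Scale the update by the router score so the router receives gradients.
        gate = torch.sigmoid(scores.gather(1, top)).unsqueeze(-1)
        updated = routed + gate * self.block(routed)
        # All other tokens pass through unchanged via the residual stream.
        return x.scatter(1, idx, updated)
```

Because `k` is fixed per layer, the compute saving is static and known ahead of time: with `capacity=0.125`, only 12.5% of tokens pay for the block's attention and MLP at that layer.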
Alternatives and similar repositories for Mixture-of-depths
Users interested in Mixture-of-depths are comparing it to the libraries listed below.
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆102 · Updated last year
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆106 · Updated this week
- ☆230 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆148 · Updated last year
- ☆86 · Updated 8 months ago
- The official implementation of the paper "Towards Efficient Mixture of Experts: A Holistic Study of Compression Techniques" (TMLR) ☆76 · Updated 6 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆155 · Updated 5 months ago
- Explorations into some recent techniques surrounding speculative decoding ☆286 · Updated 9 months ago
- [NeurIPS 2025] Simple extension of vLLM to help you speed up reasoning models without training. ☆197 · Updated 4 months ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆110 · Updated 6 months ago
- EE-LLM is a framework for large-scale training and inference of early-exit (EE) large language models (LLMs). ☆67 · Updated last year
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆229 · Updated 5 months ago
- Unofficial implementations of block/layer-wise pruning methods for LLMs. ☆72 · Updated last year
- [NeurIPS 24 Spotlight] MaskLLM: Learnable Semi-structured Sparsity for Large Language Models ☆177 · Updated 9 months ago
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆149 · Updated 6 months ago
- ☆280 · Updated 2 months ago
- ☆202 · Updated 10 months ago
- A family of compressed models obtained via pruning and knowledge distillation ☆352 · Updated 10 months ago
- [ICLR'24 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆94 · Updated 3 months ago
- ☆270 · Updated last year
- Code associated with the paper "Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding" ☆202 · Updated 7 months ago
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆36 · Updated last year
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind (see the sketch after this list) ☆101 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆161 · Updated 5 months ago
- ☆119 · Updated 3 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆78 · Updated 11 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated last year
- [ICML 2024] CLLMs: Consistency Large Language Models ☆404 · Updated 10 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆181 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆228 · Updated 3 weeks ago
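Several entries above build on speculative decoding. As a reference point, here is a minimal greedy-acceptance sketch of the idea behind the DeepMind speculative-sampling paper: a cheap draft model proposes a few tokens, the target model scores them all in a single forward pass, and drafts are kept up to the first disagreement. The function name, `gamma`, and the batch-size-1 greedy acceptance are simplifying assumptions; the paper itself uses a probabilistic accept/reject rule that exactly preserves the target distribution.

```python
import torch

@torch.no_grad()
def speculative_step(target_model, draft_model, ids: torch.Tensor, gamma: int = 4):
    """One decode step: draft `gamma` tokens, then verify with the target model.

    `ids` has shape (1, seq_len); both models are assumed to map token ids to
    logits of shape (batch, seq_len, vocab). Greedy variant, batch size 1.
    """
    # 1) Draft gamma tokens autoregressively with the cheap model.
    drafted = ids
    for _ in range(gamma):
        nxt = draft_model(drafted).argmax(dim=-1)[:, -1:]   # greedy draft token
        drafted = torch.cat([drafted, nxt], dim=-1)
    # 2) Score every drafted position with the target model in a single pass.
    target_pred = target_model(drafted).argmax(dim=-1)       # (1, seq_len + gamma)
    # 3) Keep drafts until the first disagreement; the target's token wins there.
    n = ids.shape[1]
    out = ids
    for i in range(gamma):
        tgt = target_pred[:, n + i - 1 : n + i]              # target's choice for slot n + i
        out = torch.cat([out, tgt], dim=-1)
        if not torch.equal(tgt, drafted[:, n + i : n + i + 1]):
            break                                            # disagreement: stop accepting
    return out
```

In the best case one target forward pass confirms all `gamma` draft tokens; in the worst case the step still yields one target token, so it never falls behind ordinary decoding.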