kyegomez / Mixture-of-Depths
Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models"
☆86 · Updated 3 weeks ago
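For orientation, here is a minimal sketch of the core MoD mechanism the implementations below share: a per-block router scores tokens, only a top-k fraction is processed by the block, and the rest ride the residual stream unchanged. The class name `MoDBlock`, the `capacity` parameter, and the sigmoid gate are illustrative assumptions, not the API of this repository or the paper's exact formulation.

```python
import torch
import torch.nn as nn

class MoDBlock(nn.Module):
    """Wrap any (B, T, D) -> (B, T, D) transformer block with MoD-style token
    routing. Hypothetical sketch; names are illustrative, not the repo's API."""

    def __init__(self, block: nn.Module, dim: int, capacity: float = 0.125):
        super().__init__()
        self.block = block        # the wrapped attention/MLP block
        self.router = nn.Linear(dim, 1)
        self.capacity = capacity  # fraction of tokens that receive compute

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape
        k = max(1, int(self.capacity * T))
        scores = self.router(x).squeeze(-1)         # (B, T) routing scores
        topk = scores.topk(k, dim=-1).indices       # (B, k) tokens that get compute
        idx = topk.unsqueeze(-1).expand(-1, -1, D)  # (B, k, D) gather/scatter index
        selected = x.gather(1, idx)                 # routed tokens only
        # Scale the block output by the router score so the routing decision
        # stays differentiable; unselected tokens pass through untouched.
        gate = torch.sigmoid(scores.gather(1, topk)).unsqueeze(-1)
        out = x.clone()
        out.scatter_(1, idx, selected + gate * self.block(selected))
        return out
```

With `capacity=0.125`, a block runs attention and the MLP on 12.5% of tokens per sequence, which is where the compute savings come from; the paper interleaves MoD blocks with dense blocks, a detail this sketch omits.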
Alternatives and similar repositories for Mixture-of-Depths:
Users interested in Mixture-of-Depths are comparing it to the libraries listed below.
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆34 · Updated 9 months ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆154 · Updated 9 months ago
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆86 · Updated 10 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆97 · Updated 6 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆77 · Updated 9 months ago
- The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆122 · Updated 3 months ago
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆209 · Updated 3 weeks ago
- Efficient Triton implementation of Native Sparse Attention. ☆127 · Updated this week
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆141 · Updated 6 months ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆65 · Updated 3 months ago
- Official implementation of Phi-Mamba, a MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode…) ☆102 · Updated 6 months ago
- The official implementation of the paper "Towards Efficient Mixture of Experts: A Holistic Study of Compression Techniques" (TMLR) ☆62 · Updated last week
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆59 · Updated 5 months ago
- Survey: A collection of AWESOME papers and resources on the latest research in Mixture of Experts (the top-k expert routing shared by several entries here is sketched after this list). ☆108 · Updated 7 months ago
- 🔥 A minimal training framework for scaling FLA models ☆92 · Updated last week
- Unofficial implementations of block/layer-wise pruning methods for LLMs. ☆64 · Updated 11 months ago
- Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆80 · Updated 4 months ago
- Official code for our paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆108 · Updated 2 weeks ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆39 · Updated last year
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆63 · Updated 11 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆26 · Updated 6 months ago
- My fork of Allen AI's OLMo for educational purposes. ☆30 · Updated 3 months ago
- [NeurIPS 24 Spotlight] MaskLLM: Learnable Semi-structured Sparsity for Large Language Models ☆159 · Updated 3 months ago
- Enable Next-sentence Prediction for Large Language Models with Faster Speed, Higher Accuracy and Longer Context ☆28 · Updated 7 months ago
- Some preliminary explorations of Mamba's context scaling. ☆212 · Updated last year
- Work in progress. ☆50 · Updated 2 weeks ago
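Several entries above (SwitchHead, ReMoE, the expert-pruning and MoE-compression papers, the MoE survey) build on the same top-k expert-routing primitive, sketched below. Names such as `TopKMoE` and `n_experts` are illustrative, and the per-expert Python loop is for readability; none of this mirrors any listed repository's actual code.

```python
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    """Minimal token-level top-k Mixture-of-Experts layer (illustrative sketch)."""

    def __init__(self, dim: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim) -- flatten batch and sequence dims before calling.
        logits = self.router(x)                     # (tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)  # pick k experts per token
        weights = weights.softmax(dim=-1)           # renormalize over the chosen k
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            token_ids, slot = (idx == e).nonzero(as_tuple=True)
            if token_ids.numel():                   # route hit tokens through expert e
                out[token_ids] += weights[token_ids, slot, None] * expert(x[token_ids])
        return out
```

Production systems add load-balancing losses and per-expert capacity limits, and batch tokens by expert in fused kernels rather than looping; the papers above largely differ in how this router is trained, pruned, or relaxed.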