vatsal0 / default-moe
☆18 · Updated 8 months ago
Alternatives and similar repositories for default-moe
Users interested in default-moe are comparing it to the libraries listed below.
- ☆27 · Updated 9 months ago
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆36 · Updated last year
- The evaluation framework for training-free sparse attention in LLMs ☆108 · Updated 2 months ago
- ☆19 · Updated last year
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆44 · Updated last year
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆41 · Updated last week
- This repo contains the source code for: Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs ☆43 · Updated last year
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆29 · Updated last year
- Is gradient information useful for pruning LLMs? ☆47 · Updated 4 months ago
- Code for "RSQ: Learning from Important Tokens Leads to Better Quantized LLMs"☆20Updated 6 months ago
- [ICML 2025] From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories and Applications☆52Updated 2 months ago
- [ICML2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely☆24Updated last year
- ☆127Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models☆56Updated 11 months ago
- ☆71Updated last year
- ☆62Updated 2 years ago
- Official Pytorch Implementation of Paper "DarwinLM: Evolutionary Structured Pruning of Large Language Models"☆20Updated 10 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models☆83Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆121 · Updated last year
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" ☆29 · Updated last year
- Official implementation of the paper "A deeper look at depth pruning of LLMs" ☆15 · Updated last year
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆65 · Updated last year
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- Compressed LLMs for Efficient Text Generation [ICLR'24 Workshop] ☆89 · Updated last year
- PyTorch implementation of "Oscillation-Reduced MXFP4 Training for Vision Transformers" on DeiT model pre-training ☆34 · Updated 6 months ago
- An extension of the GaLore paper, performing Natural Gradient Descent in a low-rank subspace ☆18 · Updated last year
- ☆85 · Updated last month
- [ICLR 2025] Official PyTorch implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆27 · Updated 5 months ago
- ☆133 · Updated 7 months ago