lhallee / Multi_Head_Mixture_of_Experts__MH-MOE
☆29 · Updated last year
Alternatives and similar repositories for Multi_Head_Mixture_of_Experts__MH-MOE
Users interested in Multi_Head_Mixture_of_Experts__MH-MOE are comparing it to the libraries listed below.
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆66 · Updated 2 years ago
- The official implementation for MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR '24) ☆69 · Updated 4 months ago
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆110 · Updated last week
- Official PyTorch implementation of "The Hidden Attention of Mamba Models" ☆229 · Updated last month
- ☆16 · Updated 5 months ago
- A repository for DenseSSMs ☆89 · Updated last year
- Implementation of MoE-Mamba from the paper "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in PyTorch and Ze… ☆115 · Updated last month
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆135 · Updated 7 months ago
- [ICLR 2025] The official implementation of Autoregressive Pretraining with Mamba in Vision ☆87 · Updated 6 months ago
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆57 · Updated last year
- ☆48 · Updated last year
- ☆41 · Updated last year
- PyTorch implementation of Soft MoE by Google Brain from "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆78 · Updated 2 years ago
- Implementation of ViTAR: Vision Transformer with Any Resolution in PyTorch ☆38 · Updated last year
- State Space Models ☆71 · Updated last year
- ☆50 · Updated 10 months ago
- ☆91 · Updated 2 years ago
- 🔥MixPro: Data Augmentation with MaskMix and Progressive Attention Labeling for Vision Transformer [Official, ICLR 2023] ☆21 · Updated 2 years ago
- ☆149 · Updated last year
- Awesome list of papers that extend Mamba to various applications ☆139 · Updated 5 months ago
- PyTorch implementation of LIMoE ☆52 · Updated last year
- CLIP-MoE: Mixture of Experts for CLIP ☆49 · Updated last year
- [NeurIPS 2024] Official implementation of "Visual Fourier Prompt Tuning" ☆36 · Updated 10 months ago
- [CVPR'23 & TPAMI'25] Hard Patches Mining for Masked Image Modeling & Bootstrap Masked Visual Modeling via Hard Patch Mining ☆105 · Updated 7 months ago
- Official implementation of DiffCLIP: Differential Attention Meets CLIP ☆47 · Updated 8 months ago
- Adapting LLaMA Decoder to Vision Transformer ☆30 · Updated last year
- Implementation of "the first large-scale multimodal mixture of experts models" from the paper "Multimodal Contrastive Learning with… ☆36 · Updated last month
- [NAACL 2025] MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning ☆19 · Updated 6 months ago
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆36 · Updated last year
- [ICCV23] Robust Mixture-of-Expert Training for Convolutional Neural Networks by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Hua… ☆66 · Updated 2 years ago