lhallee / Multi_Head_Mixture_of_Experts__MH-MOE
☆29 · Updated last year
Alternatives and similar repositories for Multi_Head_Mixture_of_Experts__MH-MOE
Users interested in Multi_Head_Mixture_of_Experts__MH-MOE are comparing it to the repositories listed below.
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆65 · Updated 2 years ago
- A repository for DenseSSMs ☆89 · Updated last year
- The official implementation for MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR '24) ☆68 · Updated 4 months ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆228 · Updated 3 weeks ago
- ☆16 · Updated 4 months ago
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆109 · Updated this week
- Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in PyTorch and Ze… ☆114 · Updated 2 weeks ago
- ☆41 · Updated last year
- [ICLR2025] This repository is the official implementation of our Autoregressive Pretraining with Mamba in Vision ☆87 · Updated 5 months ago
- ☆47 · Updated last year
- A training-free approach to accelerate ViTs and VLMs by pruning redundant tokens based on similarity ☆39 · Updated 5 months ago
- Implementation of the "the first large-scale multimodal mixture of experts models." from the paper: "Multimodal Contrastive Learning with…☆30Updated 2 weeks ago
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆34 · Updated last year
- State Space Models ☆71 · Updated last year
- PyTorch implementation of the paper: "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆25 · Updated last week
- ☆148 · Updated last year
- [ECCV 2024] FlexAttention for Efficient High-Resolution Vision-Language Models ☆46 · Updated 10 months ago
- [CVPR'23 & TPAMI'25] Hard Patches Mining for Masked Image Modeling & Bootstrap Masked Visual Modeling via Hard Patch Mining ☆105 · Updated 6 months ago
- ☆50 · Updated 9 months ago
- Awesome list of papers that extend Mamba to various applications. ☆138 · Updated 4 months ago
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding"… ☆59 · Updated last year
- Implementation of ViTAR: Vision Transformer with Any Resolution in PyTorch ☆38 · Updated 11 months ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆133 · Updated 7 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆49 · Updated last year
- [NAACL 2025] MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning ☆19 · Updated 5 months ago
- ☆91 · Updated 2 years ago
- Toy reproduction of Auxiliary-Loss-Free Load Balancing Strategy for Mixture-of-Experts ☆24 · Updated last year
- An efficient PyTorch implementation of selective scan in one file, works with both CPU and GPU, with corresponding mathematical derivatio… ☆96 · Updated 3 weeks ago
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆56 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆83 · Updated last year