lhallee / Multi_Head_Mixture_of_Experts__MH-MOE
☆29 · Updated last year
Alternatives and similar repositories for Multi_Head_Mixture_of_Experts__MH-MOE
Users interested in Multi_Head_Mixture_of_Experts__MH-MOE are comparing it to the libraries listed below.
- The official implementation for MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR '24) ☆68 · Updated 5 months ago
- A repository for DenseSSMs ☆89 · Updated last year
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆231 · Updated 2 months ago
- Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in Pytorch and Ze… ☆119 · Updated 2 months ago
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆67 · Updated 2 years ago
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆111 · Updated 2 weeks ago
- ☆23 · Updated last year
- ☆50 · Updated 10 months ago
- [NeurIPS 2024] Official implementation of "Visual Fourier Prompt Tuning" ☆36 · Updated 11 months ago
- [ICLR 2025] This repository is the official implementation of our Autoregressive Pretraining with Mamba in Vision ☆88 · Updated 6 months ago
- ☆41 · Updated last year
- Awesome list of papers that extend Mamba to various applications ☆139 · Updated 6 months ago
- [ECCV 2024] FlexAttention for Efficient High-Resolution Vision-Language Models ☆46 · Updated 11 months ago
- ☆48 · Updated last year
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆36 · Updated last year
- CLIP-MoE: Mixture of Experts for CLIP ☆50 · Updated last year
- [CVPR'23 & TPAMI'25] Hard Patches Mining for Masked Image Modeling & Bootstrap Masked Visual Modeling via Hard Patch Mining ☆106 · Updated 8 months ago
- State Space Models ☆71 · Updated last year
- ☆91 · Updated 2 years ago
- Implementation of ViTAR: Vision Transformer with Any Resolution in PyTorch ☆38 · Updated last year
- ☆152 · Updated last year
- ☆53 · Updated 11 months ago
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding" … ☆61 · Updated last year
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆138 · Updated 8 months ago
- Implementation of the "the first large-scale multimodal mixture of experts models." from the paper: "Multimodal Contrastive Learning with…☆36Updated 2 months ago
- My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing o… ☆44 · Updated last year
- Pytorch Implementation of the paper: "Learning to (Learn at Test Time): RNNs with Expressive Hidden States" ☆25 · Updated 2 weeks ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆81 · Updated 2 years ago
- This repo contains the source code for VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks (NeurIPS 2024) ☆42 · Updated last year
- ☆16 · Updated 6 months ago