Oliver-FutureAI / Awesome-MoE
Awesome list of Mixture-of-Experts (MoE)
☆20 · Updated 10 months ago
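For orientation on what the MoE resources collected in this list implement, below is a minimal sketch of top-k token routing, the core idea behind most Mixture-of-Experts layers. It assumes PyTorch; the class name `SimpleMoE` and its arguments (`num_experts`, `top_k`) are illustrative placeholders, not the API of Awesome-MoE or of any repository listed below.

```python
# Minimal top-k token-routing MoE layer, for orientation only.
# All class and argument names here are illustrative and do not come
# from Awesome-MoE or any repository listed on this page.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Router: scores each token against every expert.
        self.router = nn.Linear(dim, num_experts)
        # Experts: independent feed-forward blocks.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim) -> flatten tokens for routing.
        tokens = x.reshape(-1, x.shape[-1])
        gate_logits = self.router(tokens)                      # (tokens, experts)
        weights, indices = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)                   # renormalize over chosen experts
        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            # Tokens whose top-k choices include expert e.
            token_idx, slot = (indices == e).nonzero(as_tuple=True)
            if token_idx.numel() == 0:
                continue
            out[token_idx] += weights[token_idx, slot].unsqueeze(-1) * expert(tokens[token_idx])
        return out.reshape_as(x)


# Usage: route a batch of 8 sequences of 16 tokens through 4 experts.
moe = SimpleMoE(dim=64)
y = moe(torch.randn(8, 16, 64))
print(y.shape)  # torch.Size([8, 16, 64])
```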
Alternatives and similar repositories for Awesome-MoE:
Users interested in Awesome-MoE are comparing it to the repositories listed below
- The official GitHub repo for "Test-Time Training with Masked Autoencoders" ☆82 · Updated last year
- The official implementation of the NeurIPS 2024 paper "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" ☆44 · Updated 3 months ago
- ☆16 · Updated 5 months ago
- ☆108 · Updated last year
- Official implementation for 'Class-Balancing Diffusion Models' ☆54 · Updated 11 months ago
- The PyTorch implementation for "DEAL: Disentangle and Localize Concept-level Explanations for VLMs" (ECCV 2024 Strong Double Blind) ☆19 · Updated 5 months ago
- Distilling Large Vision-Language Model with Out-of-Distribution Generalizability (ICCV 2023) ☆56 · Updated last year
- Code for our ICML'24 paper on multimodal dataset distillation ☆37 · Updated 6 months ago
- Prioritize Alignment in Dataset Distillation ☆20 · Updated 4 months ago
- This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strat… ☆77 · Updated last month
- ☆128 · Updated 9 months ago
- [CVPR2024 Highlight] Official implementation for Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompt… ☆39 · Updated 3 months ago
- Source code for the NeurIPS'23 paper "Dream the Impossible: Outlier Imagination with Diffusion Models" ☆68 · Updated 2 months ago
- [CVPR 2025 (Oral)] Mitigating Hallucinations in Large Vision-Language Models via DPO: On-Policy Data Hold the Key ☆48 · Updated last week
- Code for ICML 2024 paper (Oral) — Test-Time Model Adaptation with Only Forward Passes ☆72 · Updated 7 months ago
- ☆17 · Updated 5 months ago
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆82 · Updated 6 months ago
- ☆16 · Updated 10 months ago
- ☆85 · Updated 2 years ago
- Instruction Tuning in Continual Learning paradigm ☆47 · Updated 2 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆57 · Updated 4 months ago
- Official Code for NeurIPS 2022 Paper: How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders ☆58 · Updated last year
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm ☆66 · Updated last month
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆81 · Updated last year
- [NeurIPS'23] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions ☆60 · Updated 11 months ago
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆102 · Updated 10 months ago
- [NeurIPS 2022] Revisiting Realistic Test-Time Training: Sequential Inference and Adaptation by Anchored Clustering ☆47 · Updated last year
- [ICLR24] AutoVP: An Automated Visual Prompting Framework and Benchmark ☆18 · Updated last year
- PyTorch implementation of the NeurIPS 2023 paper "Sparse Parameterization for Epitomic Dataset Distillation" ☆20 · Updated 9 months ago
- [CVPR2024] Efficient Dataset Distillation via Minimax Diffusion ☆90 · Updated last year