Oliver-FutureAI / Awesome-MoE
Awesome list of Mixture-of-Experts (MoE)
☆25 · Updated last year
Alternatives and similar repositories for Awesome-MoE
Users interested in Awesome-MoE are comparing it to the repositories listed below.
- [CVPR2024] Efficient Dataset Distillation via Minimax Diffusion ☆104 · Updated last year
- Code for our ICML'24 paper on multimodal dataset distillation ☆43 · Updated last year
- Official implementation for 'Class-Balancing Diffusion Models' ☆54 · Updated last year
- (NeurIPS 2023 spotlight) Large-scale Dataset Distillation/Condensation, 50 IPC (Images Per Class) achieves the highest 60.8% on original … ☆135 · Updated last year
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm ☆80 · Updated 10 months ago
- A PyTorch implementation of CVPR24 paper "D4M: Dataset Distillation via Disentangled Diffusion Model" ☆38 · Updated last year
- [ICLR 2024] SemiReward: A General Reward Model for Semi-supervised Learning ☆76 · Updated 2 months ago
- [ICLR 2024 (Spotlight)] "Frozen Transformers in Language Models are Effective Visual Encoder Layers" ☆246 · Updated last year
- ☆113 · Updated last year
- [ICCV 2023 Oral] Official Implementation of "Denoising Diffusion Autoencoders are Unified Self-supervised Learners" ☆183 · Updated last month
- Official repo of M$^2$PT: Multimodal Prompt Tuning for Zero-shot Instruction Learning ☆27 · Updated 9 months ago
- Official implementation for paper "Knowledge Diffusion for Distillation", NeurIPS 2023 ☆94 · Updated last year
- Diffusion-TTA improves pre-trained discriminative models such as image classifiers or segmentors using pre-trained generative models. ☆79 · Updated last year
- ICLR 2024, Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆105 · Updated last year
- Official PyTorch implementation for "Diffusion Models and Semi-Supervised Learners Benefit Mutually with Few Labels" ☆96 · Updated last year
- [CVPR2024 highlight] Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching (G-VBSM) ☆28 · Updated last year
- ☆92 · Updated 2 years ago
- ☆13 · Updated 11 months ago
- AAAI 2024, M3D: Dataset Condensation by Minimizing Maximum Mean Discrepancy ☆25 · Updated last year
- Source code for NeurIPS'23 paper "Dream the Impossible: Outlier Imagination with Diffusion Models" ☆72 · Updated 8 months ago
- The official implementation of "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" (NeurIPS 2024) ☆52 · Updated last year
- Efficient Dataset Distillation by Representative Matching ☆113 · Updated last year
- [ICLR 2024 Oral] Less is More: Fewer Interpretable Region via Submodular Subset Selection ☆86 · Updated 2 months ago
- [TPAMI 2024] Probabilistic Contrastive Learning for Long-Tailed Visual Recognition ☆89 · Updated last year
- Code for ICML 2024 paper (Oral): Test-Time Model Adaptation with Only Forward Passes ☆92 · Updated last year
- ☆138 · Updated last year
- [ICLR 2025 Oral🔥] SD-LoRA: Scalable Decoupled Low-Rank Adaptation for Class Incremental Learning ☆75 · Updated 6 months ago
- Official implementation of "Interpreting CLIP's Image Representation via Text-Based Decomposition" ☆233 · Updated 7 months ago
- ☆16 · Updated last year
- [NeurIPS'23] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions ☆62 · Updated last year