Oliver-FutureAI / Awesome-MoE
Awesome list of Mixture-of-Experts (MoE)
☆21 · Updated last year
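For orientation, the sketch below shows roughly what a minimal top-k Mixture-of-Experts layer looks like in PyTorch. It is an illustrative example only, not code from Awesome-MoE or from any repository listed here; the class name, expert count, top-k value, and dimensions are arbitrary placeholders.

```python
# Minimal top-k MoE layer sketch (illustrative; all names and sizes are placeholders).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, dim=256, num_experts=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Gating network: scores each token against every expert.
        self.gate = nn.Linear(dim, num_experts)
        # Experts: independent feed-forward networks.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):                                # x: (batch, tokens, dim)
        scores = self.gate(x)                            # (B, T, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # route each token to its top-k experts
        weights = F.softmax(weights, dim=-1)             # normalize the selected gate scores
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                  # tokens assigned to expert e in slot k
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out
```

The per-expert loop is written for readability; production MoE implementations typically batch tokens per expert and add a load-balancing loss on the gate.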
Alternatives and similar repositories for Awesome-MoE
Users interested in Awesome-MoE are comparing it to the repositories listed below
- A PyTorch implementation of CVPR24 paper "D4M: Dataset Distillation via Disentangled Diffusion Model" ☆34 · Updated last year
- Official implementation for 'Class-Balancing Diffusion Models' ☆54 · Updated last year
- [CVPR2024] Efficient Dataset Distillation via Minimax Diffusion ☆98 · Updated last year
- ☆29 · Updated 2 years ago
- [ICLR 2025 Oral🔥] SD-LoRA: Scalable Decoupled Low-Rank Adaptation for Class Incremental Learning ☆58 · Updated 3 months ago
- Code for our ICML'24 paper on multimodal dataset distillation ☆40 · Updated last year
- Code for ICML 2024 paper (Oral) — Test-Time Model Adaptation with Only Forward Passes ☆87 · Updated last year
- Official repo of M$^2$PT: Multimodal Prompt Tuning for Zero-shot Instruction Learning ☆27 · Updated 6 months ago
- Source code for NeurIPS'23 paper "Dream the Impossible: Outlier Imagination with Diffusion Models" ☆70 · Updated 6 months ago
- ☆112 · Updated last year
- [ICCV 2023] A Unified Continual Learning Framework with General Parameter-Efficient Tuning ☆88 · Updated last year
- The official implementation of the CVPR 2024 work Interference-Free Low-Rank Adaptation for Continual Learning ☆90 · Updated 7 months ago
- The official PyTorch implementation of "SoTTA: Robust Test-Time Adaptation on Noisy Data Streams (NeurIPS '23)" by Taesik Gong*, … ☆22 · Updated last year
- [TPAMI 2024] Probabilistic Contrastive Learning for Long-Tailed Visual Recognition ☆85 · Updated last year
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm ☆74 · Updated 7 months ago
- ☆91 · Updated 2 years ago
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆100 · Updated last year
- Code for ICLR 2023 paper (Oral) — Towards Stable Test-Time Adaptation in Dynamic Wild World ☆191 · Updated 2 years ago
- (NeurIPS 2023 spotlight) Large-scale Dataset Distillation/Condensation, 50 IPC (Images Per Class) achieves the highest 60.8% on original … ☆129 · Updated 11 months ago
- [NeurIPS 2024, spotlight] Scaling Out-of-Distribution Detection for Multiple Modalities ☆66 · Updated 4 months ago
- [NeurIPS'23] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions ☆62 · Updated last year
- [NeurIPS 2023] Generalized Logit Adjustment ☆38 · Updated last year
- The official implementation of the NeurIPS 2024 paper "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" ☆48 · Updated 9 months ago
- [ICLR 2024 Oral] Less is More: Fewer Interpretable Region via Submodular Subset Selection ☆83 · Updated 4 months ago
- [ICML 2023] On Pitfalls of Test-Time Adaptation ☆123 · Updated last year
- [ICLR 2024] SemiReward: A General Reward Model for Semi-supervised Learning ☆72 · Updated last year
- Official PyTorch implementation for "Diffusion Models and Semi-Supervised Learners Benefit Mutually with Few Labels" ☆95 · Updated last year
- Code for the CVPR 2024 paper "Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters" ☆250 · Updated last month
- PyTorch implementation of our CVPR 2024 paper "Unified Entropy Optimization for Open-Set Test-Time Adaptation" ☆28 · Updated last year
- The official GitHub repo for "Test-Time Training with Masked Autoencoders" ☆88 · Updated last year