SuperBruceJia / Awesome-Mixture-of-Experts
Awesome Mixture of Experts (MoE): A Curated List of Mixture of Experts (MoE) and Mixture of Multimodal Experts (MoME)
☆51 · Updated 2 months ago
Alternatives and similar repositories for Awesome-Mixture-of-Experts
Users that are interested in Awesome-Mixture-of-Experts are comparing it to the libraries listed below
- [TMLR 2025] Efficient Reasoning Models: A Survey ☆285 · Updated last month
- [ACM Computing Surveys 2025] This repository collects awesome surveys, resources, and papers for Lifelong Learning with Large Language Model… ☆159 · Updated 6 months ago
- Survey of Small Language Models from Penn State, ... ☆229 · Updated last month
- MokA: Multimodal Low-Rank Adaptation for MLLMs ☆58 · Updated 5 months ago
- An RLHF Infrastructure for Vision-Language Models ☆189 · Updated last year
- 📖 A repository for organizing papers, code, and other resources related to Latent Reasoning ☆317 · Updated last month
- The official GitHub repository for the survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language … ☆156 · Updated 7 months ago
- 😎 A Survey of Efficient Reasoning for Large Reasoning Models: Language, Multimodality, Agent, and Beyond ☆321 · Updated 2 months ago
- A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository agg… ☆173 · Updated 2 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆150 · Updated 5 months ago
- Awesome Low-Rank Adaptation ☆58 · Updated 4 months ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆200 · Updated last year
- A curated list of awesome Multimodal studies ☆301 · Updated last week
- [TKDE'25] The official GitHub page for the survey paper "A Survey on Mixture of Experts in Large Language Models" ☆468 · Updated 5 months ago
- [ICLR 2025] Mitigating Modality Prior-Induced Hallucinations in Multimodal Large Language Models via Deciphering Attention Causality ☆59 · Updated 5 months ago
- Source code for the paper "A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models" (ICML 2025) ☆35 · Updated 8 months ago
- A regularly updated paper list for LLMs-reasoning-in-latent-space ☆242 · Updated this week
- An up-to-date curated list of state-of-the-art research, papers, and resources on hallucinations in large vision-language models ☆233 · Updated 2 months ago
- Latest Advances on Reasoning of Multimodal Large Language Models (Multimodal R1 / Visual R1) 🍓 ☆35 · Updated 8 months ago
- Visualizing the attention of vision-language models ☆268 · Updated 9 months ago
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆67 · Updated last month
- Extrapolating RLVR to General Domains without Verifiers ☆184 · Updated 4 months ago
- AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models, ICLR 2025 (Outstanding Paper) ☆380 · Updated 2 months ago
- AdaMoLE: Adaptive Mixture of LoRA Experts ☆38 · Updated last year
- A Survey on Benchmarks of Multimodal Large Language Models ☆145 · Updated 5 months ago
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. arXiv:2408.07666. ☆621 · Updated last week
- VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning ☆132 · Updated last year