LINs-lab / DynMoE
[ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models
☆85 · Updated 2 months ago
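DynMoE's headline idea, per the paper title, is letting each token activate a variable number of experts rather than a fixed top-k. Below is a minimal, hypothetical PyTorch sketch of one way such threshold-based dynamic gating could look; the class name `DynamicGate`, the per-expert learnable thresholds, and the argmax fallback are illustrative assumptions, not the repository's actual implementation.

```python
# A minimal sketch of threshold-based "dynamic" MoE gating, where each token
# activates however many experts clear a learnable per-expert threshold
# (instead of a fixed top-k). Illustrative only -- not the DynMoE code.
import torch
import torch.nn as nn


class DynamicGate(nn.Module):
    def __init__(self, hidden_dim: int, num_experts: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, num_experts, bias=False)
        # One learnable activation threshold per expert (assumption).
        self.thresholds = nn.Parameter(torch.zeros(num_experts))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, hidden_dim) -> per-expert gate scores in (0, 1).
        scores = torch.sigmoid(self.scorer(x))
        # An expert fires only when its score exceeds its threshold,
        # so each token may route to anywhere from 0 to num_experts experts.
        mask = scores > torch.sigmoid(self.thresholds)
        # Guarantee at least one expert per token: fall back to the argmax.
        fallback = torch.zeros_like(mask)
        fallback.scatter_(1, scores.argmax(dim=1, keepdim=True), True)
        mask = torch.where(mask.any(dim=1, keepdim=True), mask, fallback)
        return scores * mask  # sparse routing weights


if __name__ == "__main__":
    gate = DynamicGate(hidden_dim=64, num_experts=8)
    weights = gate(torch.randn(4, 64))
    print("experts per token:", (weights > 0).sum(dim=1).tolist())
```

In a sketch like this, the average number of active experts per token becomes a learned quantity rather than a hyperparameter, which is the property the paper's "auto-tuning" framing suggests.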
Alternatives and similar repositories for DynMoE:
Users interested in DynMoE are comparing it to the repositories listed below.
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆61 · Updated 2 months ago
- Code release for VTW (AAAI 2025, Oral) ☆34 · Updated 2 months ago
- [EMNLP 2024 Findings 🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In…" ☆92 · Updated 5 months ago
- [ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont…" ☆27 · Updated 4 months ago
- A paper list on token merging, reduction, resampling, and dropping for MLLMs ☆47 · Updated 3 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆99 · Updated last month
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆36 · Updated 9 months ago
- The official repository for the paper "Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark" ☆48 · Updated 2 weeks ago
- A paper list on inference-time/test-time scaling and computing ☆160 · Updated last week
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆123 · Updated 10 months ago
- Official code for our paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆111 · Updated this week
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆78 · Updated 4 months ago
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆44 · Updated 8 months ago
- Official implementation of the paper "Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal …" ☆46 · Updated last month
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆153 · Updated 7 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆73 · Updated 2 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆114 · Updated last month
- [CVPR 2025] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆89 · Updated last month
- [MM 2024, Oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆56 · Updated 8 months ago
- Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" ☆31 · Updated 2 weeks ago
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆86 · Updated last month