Fsoft-AIC / LibMoE
LibMoE: A Library for Comprehensive Benchmarking Mixture of Experts in Large Language Models
☆43 · Updated last week
Alternatives and similar repositories for LibMoE
Users interested in LibMoE are comparing it to the libraries listed below.
- Distillation Contrastive Decoding: Improving LLMs Reasoning with Contrastive Decoding and Distillation · ☆35 · Updated last year
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging · ☆71 · Updated 8 months ago
- ☆196 · Updated last year
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models · ☆83 · Updated 11 months ago
- [Technical Report] Official PyTorch implementation code for realizing the technical part of Phantom of Latent representing equipped with … · ☆61 · Updated last year
- Code for paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models" · ☆48 · Updated last year
- Survey: A collection of AWESOME papers and resources on the latest research in Mixture of Experts · ☆137 · Updated last year
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024 · ☆94 · Updated last year
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed" · ☆179 · Updated 7 months ago
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" · ☆56 · Updated last year
- ☆70 · Updated last year
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" · ☆39 · Updated last year
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… · ☆41 · Updated 6 months ago
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… · ☆46 · Updated last year
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" · ☆133 · Updated 7 months ago
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" · ☆109 · Updated this week
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) · ☆132 · Updated 4 months ago
- AdaMoLE: Adaptive Mixture of LoRA Experts · ☆37 · Updated last year
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method · ☆195 · Updated last year
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models · ☆143 · Updated 4 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" · ☆124 · Updated last year
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning · ☆88 · Updated last year
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension · ☆70 · Updated last year
- ☆179 · Updated 5 months ago
- [COLM'25] Official implementation of the Law of Vision Representation in MLLMs · ☆168 · Updated last month
- [NeurIPS 2024] A Novel Rank-Based Metric for Evaluating Large Language Models · ☆54 · Updated 5 months ago
- Code for Heima · ☆56 · Updated 6 months ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models · ☆83 · Updated last year
- Awesome Low-Rank Adaptation · ☆52 · Updated 3 months ago
- ☆28 · Updated last year