LiaoMengqi / HMoRA
[ ICLR 2025 ] Making LLMs More Effective with Hierarchical Mixture of LoRA Experts
☆24 · Updated 3 months ago
Alternatives and similar repositories for HMoRA
Users that are interested in HMoRA are comparing it to the libraries listed below
- [ICML 2025] Test-Time Learning for Large Language Models ☆39 · Updated 4 months ago
- ☆125 · Updated last year
- Official implementation of "MMNeuron: Discovering Neuron-Level Domain-Specific Interpretation in Multimodal Large Language Model". Our co… ☆25 · Updated last year
- ☆28 · Updated last year
- [ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning" ☆101 · Updated last year
- Code for ACL 2024 accepted paper titled "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language … ☆38 · Updated 11 months ago
- ☆36 · Updated 11 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆74 · Updated 10 months ago
- MokA: Multimodal Low-Rank Adaptation for MLLMs ☆62 · Updated last week
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆201 · Updated last year
- Source code of EMNLP 2022 Findings paper "SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters" ☆22 · Updated last year
- AdaMoLE: Adaptive Mixture of LoRA Experts ☆38 · Updated last year
- Official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆40 · Updated last year
- ☆28 · Updated last year
- EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [🔥The Exploration of R1 for General Audio-Vi… ☆70 · Updated 7 months ago
- Code for ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆33 · Updated 10 months ago
- [NeurIPS 2024] Code for the paper "Parameter Competition Balancing for Model Merging" ☆48 · Updated last year
- ACL 2025: SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs; preprint: SoftCoT++: Test-Time Scaling with Soft Chain-of… ☆74 · Updated 7 months ago
- MoCLE (first MLLM with MoE for instruction customization and generalization; https://arxiv.org/abs/2312.12379) ☆45 · Updated 6 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆58 · Updated last year
- Official code for our paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆142 · Updated 9 months ago
- [ACM Multimedia 2025] This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual… ☆82 · Updated 10 months ago
- ☆173 · Updated last year
- [ICML 2025] Official implementation of paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in… ☆180 · Updated 3 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆84 · Updated last year
- ☆192 · Updated last year
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆131 · Updated 10 months ago
- Code for "Reducing Hallucinations in Vision-Language Models via Latent Space Steering" ☆99 · Updated last year
- Official code for our paper "Model Composition for Multimodal Large Language Models" (ACL 2024) ☆31 · Updated last year
- ☆138 · Updated 9 months ago