Bumble666 / Hyper_MoE
☆32 · Updated 6 months ago
Alternatives and similar repositories for Hyper_MoE
Users interested in Hyper_MoE are comparing it to the repositories listed below.
- MoCLE (first MLLM with MoE for instruction customization and generalization; https://arxiv.org/abs/2312.12379) ☆42 · Updated last month
- ☆149 · Updated last year
- Code for the ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆22 · Updated 5 months ago
- ☆78 · Updated last year
- [SIGIR'24] The official implementation of MOELoRA ☆175 · Updated last year
- ☆47 · Updated 8 months ago
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …" ☆35 · Updated 6 months ago
- State-of-the-art parameter-efficient MoE fine-tuning method ☆176 · Updated 11 months ago
- ☆183 · Updated last year
- ☆112 · Updated last year
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆80 · Updated 9 months ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆81 · Updated last year
- My commonly used tools ☆57 · Updated 7 months ago
- ☆49 · Updated 3 weeks ago
- MMoE: Multimodal Mixture-of-Experts (EMNLP 2024) ☆11 · Updated 8 months ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year
- Code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and LLaMA-3.2-vi… ☆110 · Updated last month
- A Hugging Face Trainer that records the losses of different tasks and objectives ☆43 · Updated 5 months ago
- ☆28 · Updated last year
- Code for the ACL 2022 paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning" ☆62 · Updated 3 years ago
- ☆100 · Updated last year
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆147 · Updated last year
- [ICLR 2024] Repository for the paper "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning" ☆95 · Updated last year
- ☆255 · Updated last month
- AdaMoLE: Adaptive Mixture of LoRA Experts ☆34 · Updated 10 months ago
- ☆118 · Updated 5 months ago
- Official repository of "Learning what reinforcement learning can't" ☆54 · Updated last week
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning): diving into self-evolving training for multimodal reasoning ☆64 · Updated 3 weeks ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆220 · Updated 8 months ago
- Must-read papers on Large Language Model (LLM) continual learning ☆144 · Updated last year