tianyang-x / Mixture-of-Domain-Adapters
Codebase for ACL 2023 paper "Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models' Memories"
☆51 · Updated 2 years ago
Alternatives and similar repositories for Mixture-of-Domain-Adapters
Users interested in Mixture-of-Domain-Adapters are comparing it to the repositories listed below.
- Code for "Small Models are Valuable Plug-ins for Large Language Models" ☆131 · Updated 2 years ago
- [ACL 2024 Oral] This is the code repo for our ACL'24 paper "MARVEL: Unlocking the Multi-Modal Capability of Dense Retrieval via Visual Mo… ☆38 · Updated last year
- [ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning" ☆97 · Updated last year
- [SIGIR'24] The official implementation code of MOELoRA. ☆184 · Updated last year
- MoCLE (first MLLM with MoE for instruction customization and generalization) (https://arxiv.org/abs/2312.12379) ☆44 · Updated 4 months ago
- Code for paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning", ACL 2022 ☆63 · Updated 3 years ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models. ☆83 · Updated 11 months ago
- Official code for paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆166 · Updated last year
- ☆24 · Updated last year
- ☆54 · Updated 8 months ago
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- An easy-to-use hallucination detection framework for LLMs. ☆61 · Updated last year
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆45 · Updated last year
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆83 · Updated last year
- Paper list and datasets for the paper "A Survey on Data Selection for LLM Instruction Tuning" ☆46 · Updated last year
- [TMLR 2025] A Survey on the Honesty of Large Language Models ☆61 · Updated 10 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆135 · Updated 11 months ago
- Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue (ACL 2024) ☆24 · Updated last week
- [ACL 2024] ANAH & [NeurIPS 2024] ANAH-v2 & [ICLR 2025] Mask-DPO ☆55 · Updated 6 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆72 · Updated last year
- Released code for our ICLR'23 paper. ☆66 · Updated 2 years ago
- Must-read Papers on Large Language Model (LLM) Continual Learning ☆147 · Updated last year
- ☆161 · Updated last year
- [ICLR 2023] This is the code repo for our ICLR'23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa… ☆53 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆134 · Updated 2 years ago
- [NAACL 2024] MMC: Advancing Multimodal Chart Understanding with LLM Instruction Tuning ☆96 · Updated 9 months ago
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆165 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆56 · Updated last year
- Large Language Models Can Self-Improve in Long-context Reasoning ☆73 · Updated 11 months ago