tianyang-x / Mixture-of-Domain-Adapters
Codebase for the ACL 2023 paper "Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models' Memories"
☆52 · Updated 2 years ago
Alternatives and similar repositories for Mixture-of-Domain-Adapters
Users interested in Mixture-of-Domain-Adapters are comparing it to the repositories listed below.
- [ACL 2024 Oral] Code repo for the ACL 2024 paper "MARVEL: Unlocking the Multi-Modal Capability of Dense Retrieval via Visual Module Plugin" ☆39 · Updated last year
- [ICLR 2024] Repository for the paper "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning" ☆100 · Updated last year
- ☆24 · Updated last year
- Code for "Small Models are Valuable Plug-ins for Large Language Models" ☆132 · Updated 2 years ago
- MoCLE, the first MLLM with a mixture-of-experts design for instruction customization and generalization (https://arxiv.org/abs/2312.12379) ☆45 · Updated 5 months ago
- Code for the paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning" (ACL 2022) ☆63 · Updated 3 years ago
- Instruct Once, Chat Consistently in Multiple Rounds: An Efficient Tuning Framework for Dialogue (ACL 2024) ☆24 · Updated 2 months ago
- [SIGIR 2024] The official implementation of MOELoRA ☆186 · Updated last year
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆83 · Updated last year
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆174 · Updated last year
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆85 · Updated last year
- ☆58 · Updated 10 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆72 · Updated last year
- Released code for our ICLR 2023 paper ☆66 · Updated 2 years ago
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 · Updated 2 years ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆85 · Updated last year
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆138 · Updated last year
- ☆87 · Updated 2 years ago
- Official code repository for the paper "Knowledge-Augmented Reasoning Distillation for Small Language Models in Knowledge-intensive Tasks" ☆42 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆134 · Updated 2 years ago
- Code for "Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models" ☆92 · Updated last year
- ☆43 · Updated 2 years ago
- The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search" [EMNLP 2025] ☆35 · Updated 3 months ago
- Official implementation for "Parameter-Efficient Fine-Tuning Design Spaces" ☆27 · Updated 2 years ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆33 · Updated 2 years ago
- Code for the ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆33 · Updated 10 months ago
- ☆19 · Updated 2 years ago
- Large Language Models Can Self-Improve in Long-context Reasoning ☆73 · Updated last year
- ☆171 · Updated last year