amazon-science / peft-design-spaces
Official implementation for "Parameter-Efficient Fine-Tuning Design Spaces"
☆26 · Updated 2 years ago
Alternatives and similar repositories for peft-design-spaces:
Users interested in peft-design-spaces are comparing it to the libraries listed below.
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆29 · Updated last year
- ☆45 · Updated 4 months ago
- The official code for paper "EasyGen: Easing Multimodal Generation with a Bidirectional Conditional Diffusion Model and LLMs" ☆73 · Updated 2 months ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆43 · Updated 7 months ago
- Released code for our ICLR23 paper. ☆63 · Updated last year
- [ACL 2024] TextBind: Multi-turn Interleaved Multimodal Instruction-following in the Wild ☆47 · Updated last year
- The released data for paper "Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models". ☆32 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆156 · Updated 7 months ago
- Source code for the paper "Prefix Language Models are Unified Modal Learners" ☆43 · Updated last year
- Touchstone: Evaluating Vision-Language Models by Language Models ☆81 · Updated last year
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆38 · Updated last year
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆61 · Updated 3 months ago
- ☆60 · Updated 2 years ago
- [ACL 2023] Code for paper “Tailoring Instructions to Student’s Learning Levels Boosts Knowledge Distillation” (https://arxiv.org/abs/2305.… ☆38 · Updated last year
- Official implementation of “Training on the Benchmark Is Not All You Need”. ☆28 · Updated last month
- ☆73 · Updated 2 years ago
- ☆22 · Updated 5 months ago
- [NeurIPS 2024] A comprehensive benchmark for evaluating critique ability of LLMs ☆38 · Updated 2 months ago
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆63 · Updated last year
- The code and data for the paper JiuZhang3.0 ☆40 · Updated 8 months ago
- ☆21 · Updated 3 months ago
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆33 · Updated 9 months ago
- [ICCV 2023] Official code for "VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control" ☆53 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 · Updated 10 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆34 · Updated 9 months ago
- Code for paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning", ACL 2022 ☆59 · Updated 2 years ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆77 · Updated 11 months ago
- [ICLR'24 spotlight] Tool-Augmented Reward Modeling ☆44 · Updated last month
- ☆95 · Updated last year
- [ACL 2024] This is the code repo for our ACL’24 paper "MARVEL: Unlocking the Multi-Modal Capability of Dense Retrieval via Visual Module … ☆36 · Updated 7 months ago