ylsung / Ladder-Side-Tuning
PyTorch code for "LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning"
☆238 · Updated 2 years ago
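For orientation, below is a minimal PyTorch sketch of the ladder side-tuning idea: a frozen backbone feeds intermediate activations into a small trainable side network through gated ladder connections, so no gradients (or activation memory) are needed for the backbone itself. All names (`LadderSideNetwork`, `side_dim`, the gate parameters) are hypothetical illustrations, not code from this repository.

```python
# Minimal sketch of the ladder side-tuning idea (hypothetical names,
# not the repository's actual implementation).
import torch
import torch.nn as nn


class LadderSideNetwork(nn.Module):
    """Small trainable side network fed by a frozen backbone via ladder connections."""

    def __init__(self, backbone_layers, hidden_dim, side_dim, num_classes):
        super().__init__()
        self.backbone_layers = backbone_layers
        for p in self.backbone_layers.parameters():
            p.requires_grad = False  # backbone stays frozen

        # One downsampler + lightweight side block per backbone layer.
        self.downs = nn.ModuleList(
            [nn.Linear(hidden_dim, side_dim) for _ in backbone_layers]
        )
        self.side_blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(side_dim, side_dim), nn.ReLU())
             for _ in backbone_layers]
        )
        # Learned scalar gates that mix the running side state with each ladder input.
        self.gates = nn.ParameterList(
            [nn.Parameter(torch.zeros(1)) for _ in backbone_layers]
        )
        self.head = nn.Linear(side_dim, num_classes)

    def forward(self, x):
        h, side = x, None
        for layer, down, block, gate in zip(
            self.backbone_layers, self.downs, self.side_blocks, self.gates
        ):
            with torch.no_grad():        # backbone activations are not kept for backprop
                h = layer(h)
            ladder = down(h.detach())    # ladder connection from the frozen backbone
            a = torch.sigmoid(gate)
            side = ladder if side is None else a * side + (1 - a) * ladder
            side = block(side)
        return self.head(side)


# Toy usage: a frozen "backbone" of 4 linear layers; only the side network trains.
backbone = nn.ModuleList([nn.Linear(64, 64) for _ in range(4)])
model = LadderSideNetwork(backbone, hidden_dim=64, side_dim=16, num_classes=10)
logits = model(torch.randn(8, 64))  # shape: (8, 10)
```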
Alternatives and similar repositories for Ladder-Side-Tuning
Users interested in Ladder-Side-Tuning are comparing it to the libraries listed below.
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆537 · Updated 3 years ago
- [NeurIPS'22] Official implementation of "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning" ☆186 · Updated last year
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆207 · Updated 2 years ago
- Recent Advances in Vision and Language Pre-training (VLP) ☆294 · Updated 2 years ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆135 · Updated 2 years ago
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) ☆165 · Updated last year
- [TPAMI] Searching prompt modules for parameter-efficient transfer learning ☆235 · Updated last year
- SVIT: Scaling up Visual Instruction Tuning ☆164 · Updated last year
- ☆158 · Updated 4 years ago
- 😎 Curated list of awesome LMM hallucination papers, methods & resources ☆149 · Updated last year
- Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR … ☆286 · Updated 2 years ago
- ☆118 · Updated 2 years ago
- MixGen: A New Multi-Modal Data Augmentation ☆127 · Updated 2 years ago
- Toolkit for the Elevater benchmark ☆75 · Updated last year
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains ☆407 · Updated last year
- Code for the paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning" (ACL 2022) ☆63 · Updated 3 years ago
- Code and instructions for baselines in the VLUE benchmark ☆41 · Updated 3 years ago
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆161 · Updated 3 years ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆98 · Updated last year
- [NeurIPS 2023] Text data, code, and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆286 · Updated last year
- [ICLR 2023] Code repo for the ICLR'23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa… ☆53 · Updated last year
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆224 · Updated last month
- ☆186 · Updated last year
- Code for "Multitask Vision-Language Prompt Tuning" (https://arxiv.org/abs/2211.11720) ☆57 · Updated last year
- Open-source code for the AAAI 2023 paper "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning" ☆166 · Updated 2 years ago
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆287 · Updated last year
- METER: A Multimodal End-to-end TransformER Framework ☆373 · Updated 2 years ago
- Official repository for the A-OKVQA dataset ☆99 · Updated last year
- [ACM Multimedia 2025] Official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual… ☆82 · Updated 7 months ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆91 · Updated 2 years ago