QC-LY / UniBind
The source code for "UniBind: LLM-Augmented Unified and Balanced Representation Space to Bind Them All"
☆46 · Updated last year
Alternatives and similar repositories for UniBind
Users interested in UniBind are comparing it to the repositories listed below.
- [CVPR 2024] Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities ☆99 · Updated last year
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆72 · Updated 4 months ago
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ☆69 · Updated last month
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆192 · Updated 2 months ago
- [ICLR 2024 (Spotlight)] "Frozen Transformers in Language Models are Effective Visual Encoder Layers" ☆243 · Updated last year
- [ICCV 2025] ONLY: One-Layer Intervention Sufficiently Mitigates Hallucinations in Large Vision-Language Models ☆36 · Updated 2 months ago
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ☆86 · Updated last year
- [ICLR 2025 Spotlight] Official code repository for Interleaved Scene Graph ☆28 · Updated last month
- ☆22 · Updated 4 months ago
- [CVPR 2025] 🌟🌟 EgoTextVQA: Towards Egocentric Scene-Text Aware Video Question Answering ☆37 · Updated 2 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ☆115 · Updated last month
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆83 · Updated last year
- [CVPR 2024] ViT-Lens: Towards Omni-modal Representations ☆181 · Updated 7 months ago
- [ICLR 2025] Official repo for "Streaming Video Understanding and Multi-round Interaction with Memory-enhanced Knowledge" ☆73 · Updated 6 months ago
- MADTP: Multimodal Alignment-Guided Dynamic Token Pruning for Accelerating Vision-Language Transformer ☆46 · Updated last year
- [NeurIPS 2024] Mitigating Object Hallucination via Concentric Causal Attention ☆61 · Updated 2 weeks ago
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆128 · Updated last month
- [ACM MM 2025] TimeChat-Online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆76 · Updated this week
- [CVPR 2025] FLAIR: VLM with Fine-grained Language-informed Image Representations ☆102 · Updated 2 weeks ago
- [NeurIPS 2024] Visual Perception by Large Language Model's Weights ☆45 · Updated 5 months ago
- Official implementation of MIA-DPO ☆65 · Updated 7 months ago
- Code for "DeCo: Decoupling Token Compression from Semantic Abstraction in Multimodal Large Language Models" ☆69 · Updated 2 months ago
- [NeurIPS 2024] Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" ☆91 · Updated 10 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆100 · Updated 3 weeks ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆86 · Updated last year
- [ICCV 2025 Oral] Token Activation Map to Visually Explain Multimodal LLMs ☆73 · Updated last month
- Code for the paper "Compositional Entailment Learning for Hyperbolic Vision-Language Models" ☆81 · Updated 3 months ago
- Repo for the paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆49 · Updated 2 weeks ago
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆189 · Updated 2 months ago
- Code for the ICML 2024 paper "Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition" ☆163 · Updated 6 months ago