ziqipang / LM4VisualEncoding
[ICLR 2024 (Spotlight)] "Frozen Transformers in Language Models are Effective Visual Encoder Layers"
☆238 · Updated last year
Alternatives and similar repositories for LM4VisualEncoding
Users interested in LM4VisualEncoding are comparing it to the repositories listed below.
- ☆341 · Updated last year
- MLLM-Tool: A Multimodal Large Language Model for Tool Agent Learning ☆126 · Updated last year
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆147 · Updated 11 months ago
- Official implementation of the Law of Vision Representation in MLLMs ☆160 · Updated 7 months ago
- [NeurIPS 2024] Evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?" ☆185 · Updated 9 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆139 · Updated 2 weeks ago
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆97 · Updated 3 months ago
- [CVPR'24] Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities ☆99 · Updated last year
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ☆85 · Updated 8 months ago
- [TMLR] Public code repository for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆143 · Updated 8 months ago
- ☆134 · Updated last year
- [NeurIPS 2023] Text data, code, and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆282 · Updated last year
- A comprehensive benchmark and toolkit for evaluating video-based large language models ☆128 · Updated last year
- Densely Captioned Images (DCI) dataset repository ☆186 · Updated last year
- [TPAMI] Searching prompt modules for parameter-efficient transfer learning ☆233 · Updated last year
- [CVPR 2024] ViT-Lens: Towards Omni-modal Representations ☆176 · Updated 5 months ago
- ☆69 · Updated 11 months ago
- [ECCV 2024🔥] Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners" ☆149 · Updated 10 months ago
- Open-source implementation of "Vision Transformers Need Registers" ☆185 · Updated 3 months ago
- ☆118 · Updated last year
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆316 · Updated last year
- Evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆129 · Updated last year
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆342 · Updated 6 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆325 · Updated 11 months ago
- [CVPR 2025] VoCo-LLaMA: Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆176 · Updated 3 weeks ago
- ☆98 · Updated last year
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆333 · Updated this week
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation ☆86 · Updated 10 months ago
- Video Chain of Thought: code for the ICML 2024 paper "Video-of-Thought: Step-by-Step Video Reasoning from Perception to Cognition" ☆151 · Updated 4 months ago
- SVIT: Scaling up Visual Instruction Tuning ☆163 · Updated last year