ziqipang / LM4VisualEncoding
[ICLR 2024 (Spotlight)] "Frozen Transformers in Language Models are Effective Visual Encoder Layers"
☆241 · Updated last year
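For context, the idea behind LM4VisualEncoding is to splice a frozen transformer block taken from a pretrained language model into a visual encoding pipeline, with small trainable linear layers aligning the feature dimensions on either side. Below is a minimal PyTorch sketch of that idea; the `FrozenLMBlockEncoder` name, the stand-in `nn.TransformerEncoderLayer` block, and the dimension choices are illustrative assumptions, not the repository's actual code.

```python
import torch
import torch.nn as nn

class FrozenLMBlockEncoder(nn.Module):
    """Illustrative sketch: a frozen LM transformer block applied to visual
    tokens, bracketed by trainable linear adapters (hypothetical names; the
    actual LM4VisualEncoding implementation differs in detail)."""

    def __init__(self, lm_block: nn.Module, vis_dim: int, lm_dim: int):
        super().__init__()
        self.proj_in = nn.Linear(vis_dim, lm_dim)   # trainable: visual width -> LM width
        self.lm_block = lm_block                    # pretrained LM layer, kept frozen
        self.proj_out = nn.Linear(lm_dim, vis_dim)  # trainable: LM width -> visual width
        for p in self.lm_block.parameters():
            p.requires_grad = False                 # freeze the LM block's weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, vis_dim) visual tokens from a ViT-style backbone.
        # Assumes lm_block maps (B, N, D) -> (B, N, D); real LM layers (e.g.,
        # Hugging Face LLaMA blocks) take masks and return tuples, so they
        # would need a small wrapper.
        return self.proj_out(self.lm_block(self.proj_in(x)))

# Stand-in for one real pretrained LM layer, just to keep the sketch runnable.
lm_block = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
encoder = FrozenLMBlockEncoder(lm_block, vis_dim=768, lm_dim=512)
tokens = torch.randn(2, 196, 768)                   # e.g., 14x14 ViT patch tokens
print(encoder(tokens).shape)                        # torch.Size([2, 196, 768])
```

Only the two linear projections train; gradients still flow through the frozen block, which is what lets the LM layer act as a fixed feature transform for visual tokens.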
Alternatives and similar repositories for LM4VisualEncoding
Users interested in LM4VisualEncoding are comparing it to the repositories listed below:
- ☆344 · Updated last year
- Official implementation of the Law of Vision Representation in MLLMs ☆163 · Updated 8 months ago
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆149 · Updated last year
- [TPAMI] Searching prompt modules for parameter-efficient transfer learning. ☆233 · Updated last year
- [ICLR'25] Reconstructive Visual Instruction Tuning ☆101 · Updated 3 months ago
- [TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling" ☆144 · Updated 8 months ago
- A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models! ☆131 · Updated last year
- MLLM-Tool: A Multimodal Large Language Model For Tool Agent Learning ☆128 · Updated last year
- Densely Captioned Images (DCI) dataset repository. ☆187 · Updated last year
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆142 · Updated last month
- ☆135 · Updated last year
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024) ☆87 · Updated 9 months ago
- ☆99 · Updated last year
- Open source implementation of "Vision Transformers Need Registers" ☆184 · Updated 2 weeks ago
- [ICCV 2023] Dataset Quantization ☆259 · Updated last year
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆189 · Updated 10 months ago
- ☆118 · Updated last year
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆180 · Updated last month
- [NeurIPS 2023] Text data, code and pre-trained models for paper "Improving CLIP Training with Language Rewrites" ☆283 · Updated last year
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation ☆86 · Updated 10 months ago
- [CVPR 2024] ViT-Lens: Towards Omni-modal Representations ☆178 · Updated 6 months ago
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆342 · Updated last week
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆69 · Updated 5 months ago
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆316 · Updated last year
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆82 · Updated last year
- SVIT: Scaling up Visual Instruction Tuning ☆163 · Updated last year
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?" ☆58 · Updated 2 years ago
- [NeurIPS'22] This is an official implementation for "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning" ☆184 · Updated last year
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training ☆211 · Updated 4 months ago
- ChatBridge, an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without rely… ☆52 · Updated last year