ziqipang / LM4VisualEncoding
[ICLR 2024 (Spotlight)] "Frozen Transformers in Language Models are Effective Visual Encoder Layers"
☆231 · Updated last year
Alternatives and similar repositories for LM4VisualEncoding:
Users interested in LM4VisualEncoding are comparing it to the repositories listed below.
- ☆308 · Updated last year
- Official implementation of the Law of Vision Representation in MLLMs ☆149 · Updated 3 months ago
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆295 · Updated last week
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆116 · Updated 9 months ago
- [NeurIPS 2023] Text data, code, and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆266 · Updated last year
- Open-source implementation of "Vision Transformers Need Registers" ☆163 · Updated 3 weeks ago
- ☆121 · Updated 8 months ago
- The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss". A su… ☆223 · Updated last month
- [NeurIPS 2023] Official implementation and model release for the paper "What Makes Good Examples for Visual In-Context Learning?" ☆171 · Updated 11 months ago
- 🔥 Official implementation of "TokenFlow: Unified Image Tokenizer for Multimodal Understanding and Generation" ☆253 · Updated last month
- Official implementation of "Interpreting CLIP's Image Representation via Text-Based Decomposition" ☆187 · Updated 2 months ago
- Densely Captioned Images (DCI) dataset repository ☆168 · Updated 7 months ago
- [CVPR 2024] Prompt Highlighter: Interactive Control for Multi-Modal LLMs ☆139 · Updated 6 months ago
- [TPAMI] Searching prompt modules for parameter-efficient transfer learning ☆225 · Updated last year
- ☆103 · Updated last week
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆306 · Updated 10 months ago
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆127 · Updated 3 months ago
- Explore the Limits of Omni-modal Pretraining at Scale ☆96 · Updated 5 months ago
- ☆114 · Updated 8 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆236 · Updated 4 months ago
- [NeurIPS 2024] Vision Model Pre-training on Interleaved Image-Text Data via Latent Compression Learning ☆68 · Updated last week
- The official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strat… ☆76 · Updated 10 months ago
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training ☆200 · Updated last month
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions ☆328 · Updated last month
- Official repo for "VisionZip: Longer is Better but Not Necessary in Vision Language Models" ☆235 · Updated last month
- ChatBridge, an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without rely… ☆50 · Updated last year
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆144 · Updated 2 years ago
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆197 · Updated 10 months ago
- A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models! ☆125 · Updated last year
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆123 · Updated 2 months ago