TencentARC / ViT-Lens
[CVPR 2024] ViT-Lens: Towards Omni-modal Representations
☆176 · Updated 4 months ago
Alternatives and similar repositories for ViT-Lens
Users interested in ViT-Lens are comparing it to the libraries listed below.
- [NeurIPS 2024] Official implementation of the paper "Interfacing Foundation Models' Embeddings"☆125 · Updated 10 months ago
- [CVPR 2024] Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities☆99 · Updated last year
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression☆60 · Updated 4 months ago
- Explore the Limits of Omni-modal Pretraining at Scale☆103 · Updated 9 months ago
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment☆78 · Updated 3 weeks ago
- [ICML 2025] Official repository of the paper "Subobject-level Image Tokenization"☆72 · Updated 2 months ago
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models"☆199 · Updated 5 months ago
- [ICML 2024] Official implementation of the paper "Rejuvenating image-GPT as Strong Visual Representation Learners"☆98 · Updated last year
- Official repo for StableLLAVA☆95 · Updated last year
- Official implementation of the CrossMAE paper: Rethinking Patch Dependence for Masked Autoencoders☆114 · Updated 2 months ago
- Code for the "Scaling Language-Free Visual Representation Learning" paper (Web-SSL)☆145 · Updated last month
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds☆94 · Updated 11 months ago
- [ICLR 2024 (Spotlight)] "Frozen Transformers in Language Models are Effective Visual Encoder Layers"☆238 · Updated last year
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation☆348 · Updated last month
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark☆111 · Updated 2 weeks ago
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better☆280 · Updated 5 months ago
- Implementation of PALI3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger"☆145 · Updated 2 months ago
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning"☆81Updated last year
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era"☆206Updated last year
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos☆122 · Updated 5 months ago
- EVE Series: Encoder-Free Vision-Language Models from BAAI☆332 · Updated 3 months ago
- A Comprehensive Benchmark and Toolkit for Evaluating Video-based Large Language Models!☆128 · Updated last year
- [CVPR 2024] The repository provides code for running inference and training for "Segment and Caption Anything" (SCA), links for downloading…☆224 · Updated 8 months ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143)☆167 · Updated this week
- Official implementation of "Why are Visually-Grounded Language Models Bad at Image Classification?" (NeurIPS 2024)☆85 · Updated 8 months ago
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation☆75 · Updated 3 months ago
- [ACL 2024 Findings] "TempCompass: Do Video LLMs Really Understand Videos?", Yuanxin Liu, Shicheng Li, Yi Liu, Yuxiang Wang, Shuhuai Ren, et al.☆117 · Updated 2 months ago
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models"☆169 · Updated last week
- Fine-tuning "ImageBind One Embedding Space to Bind Them All" with LoRA☆184 · Updated last year