TencentARC / ViT-Lens
[CVPR 2024] ViT-Lens: Towards Omni-modal Representations
☆183 · Updated 9 months ago
Alternatives and similar repositories for ViT-Lens
Users interested in ViT-Lens are comparing it to the repositories listed below.
- ☆99 · Updated last year
- [NeurIPS 2024] Official implementation of the paper "Interfacing Foundation Models' Embeddings" ☆127 · Updated last year
- [CVPR 2024] Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities ☆99 · Updated last year
- [ICML 2024] This repository includes the official implementation of our paper "Rejuvenating image-GPT as Strong Visual Representation Lea…" ☆98 · Updated last year
- Official repo for StableLLAVA ☆94 · Updated last year
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆61 · Updated 8 months ago
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆94 · Updated last year
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆93 · Updated 8 months ago
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆205 · Updated 10 months ago
- [CVPR 2024] The repository provides code for running inference and training for "Segment and Caption Anything" (SCA), links for downloadin… ☆230 · Updated last year
- Official repository of the paper "Subobject-level Image Tokenization" (ICML 2025) ☆88 · Updated 4 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆118 · Updated last year
- [ICLR 2024 (Spotlight)] "Frozen Transformers in Language Models are Effective Visual Encoder Layers"