UX-Decoder / FIND
[NeurIPS 2024] Official implementation of the paper "Interfacing Foundation Models' Embeddings"
☆123 · Updated 7 months ago
Alternatives and similar repositories for FIND:
Users interested in FIND are comparing it to the repositories listed below.
- [ICML 2024] This repository includes the official implementation of our paper "Rejuvenating image-GPT as Strong Visual Representation Lea… ☆98 · Updated 11 months ago
- Official repository of paper "Subobject-level Image Tokenization" ☆69 · Updated 2 weeks ago
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆48 · Updated 3 months ago
- 🔥 [CVPR 2024] Official implementation of "See, Say, and Segment: Teaching LMMs to Overcome False Premises (SESAME)" ☆37 · Updated 10 months ago
- ☆105 · Updated 10 months ago
- Official PyTorch implementation of paper "A Semantic Space is Worth 256 Language Descriptions: Make Stronger Segmentation Models with Des… ☆55 · Updated 9 months ago
- This repo contains the code for our paper "Towards Open-Ended Visual Recognition with Large Language Model" ☆95 · Updated 9 months ago
- PyTorch code for paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆197 · Updated 3 months ago
- [CVPR 2024] Official implementation of GEM (Grounding Everything Module) ☆114 · Updated last week
- [IJCV 2024] MosaicFusion: Diffusion Models as Data Augmenters for Large Vocabulary Instance Segmentation ☆121 · Updated 6 months ago
- ☆61 · Updated last year
- Object Recognition as Next Token Prediction (CVPR 2024 Highlight) ☆175 · Updated 3 months ago
- [ECCV 2024] ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation ☆85 · Updated 3 weeks ago
- [CVPR 2024] ViT-Lens: Towards Omni-modal Representations ☆174 · Updated 2 months ago
- Code base of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs ☆98 · Updated 3 weeks ago
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated 7 months ago
- Code and models for the paper "The effectiveness of MAE pre-pretraining for billion-scale pretraining" (https://arxiv.org/abs/2303.13496) ☆88 · Updated last week
- ☆45 · Updated 3 months ago
- (ICLR 2024, CVPR 2024) SparseFormer ☆73 · Updated 5 months ago
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆121 · Updated this week
- Large-Vocabulary Video Instance Segmentation dataset ☆84 · Updated 9 months ago
- ☆28 · Updated 3 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆70 · Updated 2 months ago
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation ☆86 · Updated 7 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆118 · Updated 9 months ago
- [CVPR 2025] FLAIR: VLM with Fine-grained Language-informed Image Representations ☆61 · Updated 2 weeks ago
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆69 · Updated last month
- ☆115 · Updated 8 months ago
- [ECCV 2024] Official release of SILC: Improving Vision Language Pretraining with Self-Distillation ☆42 · Updated 6 months ago
- [AAAI 2025] ChatterBox: Multi-round Multimodal Referring and Grounding ☆53 · Updated 4 months ago