UX-Decoder/FIND
[NeurIPS 2024] Official implementation of the paper "Interfacing Foundation Models' Embeddings"
☆124 · Updated 8 months ago
Alternatives and similar repositories for FIND:
Users interested in FIND are comparing it to the repositories listed below.
- [ICML 2024] This repository includes the official implementation of our paper "Rejuvenating image-GPT as Strong Visual Representation Lea…" ☆98 · Updated last year
- Official repository of the paper "Subobject-level Image Tokenization" ☆70 · Updated last month
- This repo contains the code for our paper "Towards Open-Ended Visual Recognition with Large Language Model" ☆95 · Updated 9 months ago
- Official implementation of "Describing Differences in Image Sets with Natural Language" (CVPR 2024 Oral) ☆119 · Updated last year
- Task Preference Optimization: Improving Multimodal Large Language Models with Vision Task Alignment ☆50 · Updated 4 months ago
- [CVPR 2024] Official implementation of GEM (Grounding Everything Module) ☆121 · Updated last month
- Object Recognition as Next Token Prediction (CVPR 2024 Highlight) ☆176 · Updated last week
- An open-source implementation of CLIP (with TULIP support) ☆136 · Updated last month
- Official PyTorch implementation of the paper "A Semantic Space is Worth 256 Language Descriptions: Make Stronger Segmentation Models with Des…" ☆55 · Updated 10 months ago
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆198 · Updated 4 months ago
- Large-Vocabulary Video Instance Segmentation dataset ☆86 · Updated 10 months ago
- Code for the "Scaling Language-Free Visual Representation Learning" paper (Web-SSL) ☆108 · Updated last week
- Project for "LaSagnA: Language-based Segmentation Assistant for Complex Queries" ☆56 · Updated last year
- Code base of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs ☆99 · Updated last month
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ☆116 · Updated 4 months ago
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆72 · Updated 3 months ago
- 🔥 [CVPR 2024] Official implementation of "See, Say, and Segment: Teaching LMMs to Overcome False Premises (SESAME)" ☆37 · Updated 10 months ago
- [IJCV 2024] MosaicFusion: Diffusion Models as Data Augmenters for Large Vocabulary Instance Segmentation ☆122 · Updated 7 months ago
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆145 · Updated 5 months ago
- [ICML 2025] This is the official repository of our paper "What If We Recaption Billions of Web Images with LLaMA-3?" ☆129 · Updated 10 months ago
- 1-shot image segmentation using Stable Diffusion ☆138 · Updated last year
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation ☆86 · Updated 7 months ago
- [CVPR 2024] ViT-Lens: Towards Omni-modal Representations ☆175 · Updated 3 months ago
- [ECCV 2024] ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation ☆89 · Updated last month
- (ICLR 2024, CVPR 2024) SparseFormer ☆74 · Updated 6 months ago
- Official implementation of the "CLIP-DINOiser: Teaching CLIP a few DINO tricks" paper ☆244 · Updated 6 months ago
- Code and models for the paper "The effectiveness of MAE pre-pretraining for billion-scale pretraining" (https://arxiv.org/abs/2303.13496) ☆89 · Updated 3 weeks ago