hustvl / EVF-SAM
Official code of "EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model"
☆366 · Updated last month
Alternatives and similar repositories for EVF-SAM — libraries that users interested in EVF-SAM are comparing it to:
- [NeurIPS 2024] Code release for "Segment Anything without Supervision" ☆448 · Updated 4 months ago
- RobustSAM: Segment Anything Robustly on Degraded Images (CVPR 2024 Highlight) ☆336 · Updated 6 months ago
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" ☆439 · Updated 10 months ago
- [ICCV 2023] Segment Every Reference Object in Spatial and Temporal Spaces ☆235 · Updated 2 weeks ago
- ☆227 · Updated 2 weeks ago
- [ECCV 2024 & NeurIPS 2024] Official implementation of the paper TAPTR & TAPTRv2 & TAPTRv3 ☆256 · Updated 2 months ago
- Official implementation of OV-DINO: Unified Open-Vocabulary Detection with Language-Aware Selective Fusion ☆291 · Updated 3 weeks ago
- ZIM: Zero-Shot Image Matting for Anything ☆258 · Updated 3 months ago
- [ECCV 2024] This is an official implementation for "PSALM: Pixelwise SegmentAtion with Large Multi-Modal Model" ☆224 · Updated 2 months ago
- ☆175 · Updated 3 weeks ago
- Quick exploration into fine-tuning Florence-2 ☆302 · Updated 5 months ago
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better ☆262 · Updated last month
- PyTorch implementation of "SMITE: Segment Me In TimE" (ICLR 2025) ☆202 · Updated last week
- [ECCV 2024] Tokenize Anything via Prompting ☆562 · Updated 2 months ago
- [NeurIPS 2024] SlimSAM: 0.1% Data Makes Segment Anything Slim ☆324 · Updated last week
- Official PyTorch implementation of ECCV 2024 paper: ControlNet++: Improving Conditional Controls with Efficient Consistency Feedback ☆479 · Updated last month
- Efficient Track Anything ☆485 · Updated last month
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆905 · Updated last month
- LLaVA-Interactive-Demo ☆363 · Updated 7 months ago
- [NeurIPS 2023] This repo contains the code for our paper Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convoluti… ☆306 · Updated last year
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ☆883 · Updated last month
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆156 · Updated last month
- [CVPR 2024] The repository provides code for running inference and training for "Segment and Caption Anything" (SCA), links for downloadin… ☆212 · Updated 5 months ago
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models ☆273 · Updated 10 months ago
- Official implementation of "Lumina-mGPT: Illuminate Flexible Photorealistic Text-to-Image Generation with Multimodal Generative Pretraini… ☆544 · Updated 6 months ago
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆761 · Updated 6 months ago
- [ICLR 2025] ControlAR: Controllable Image Generation with Autoregressive Models ☆197 · Updated last month
- PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding, accepted at CVPR 2024. ☆208 · Updated 2 weeks ago
- When do we not need larger vision models? ☆369 · Updated 3 weeks ago