hustvl / EVF-SAM
Official code of "EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model"
☆312 · Updated this week
Related projects
Alternatives and complementary repositories for EVF-SAM
- RobustSAM: Segment Anything Robustly on Degraded Images (CVPR 2024 Highlight) ☆312 · Updated 2 months ago
- [NeurIPS 2024] SlimSAM: 0.1% Data Makes Segment Anything Slim ☆303 · Updated this week
- [NeurIPS 2024] Code release for "Segment Anything without Supervision" ☆420 · Updated last month
- Diffusion Feedback Helps CLIP See Better ☆215 · Updated 2 months ago
- [ICCV 2023] Segment Every Reference Object in Spatial and Temporal Spaces ☆235 · Updated 10 months ago
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" ☆393 · Updated 7 months ago
- VCoder: Versatile Vision Encoders for Multimodal Large Language Models, arXiv 2023 / CVPR 2024 ☆261 · Updated 7 months ago
- [CVPR 2024] The repository provides code for running inference and training for "Segment and Caption Anything" (SCA), links for downloadin… ☆202 · Updated last month
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA ☆242 · Updated this week
- Official implementation of OV-DINO: Unified Open-Vocabulary Detection with Language-Aware Selective Fusion ☆250 · Updated 2 months ago
- [ECCV 2024] Tokenize Anything via Prompting ☆534 · Updated 4 months ago
- [ECCV 2024] This is an official implementation for "PSALM: Pixelwise SegmentAtion with Large Multi-Modal Model" ☆193 · Updated this week
- Data release for the ImageInWords (IIW) paper ☆200 · Updated this week
- Multimodal Models in Real World ☆403 · Updated 3 weeks ago
- [CVPR 2024] Code release for "InstanceDiffusion: Instance-level Control for Image Generation" ☆508 · Updated 4 months ago
- Quick exploration into fine-tuning Florence-2 ☆271 · Updated 2 months ago
- Official Implementation of "Lumina-mGPT: Illuminate Flexible Photorealistic Text-to-Image Generation with Multimodal Generative Pretraini…☆500Updated 3 months ago
- API for Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series☆778Updated 3 months ago
- [ECCV 2024] official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP"☆676Updated 3 months ago
- [NeurIPS2023] Code release for "Hierarchical Open-vocabulary Universal Image Segmentation"☆271Updated 7 months ago
- Official repository for the paper PLLaVA☆593Updated 3 months ago
- Official implementation of the "CLIP-DINOiser: Teaching CLIP a few DINO tricks" paper ☆214 · Updated 3 weeks ago
- [ECCV 2024 Oral 🔥] Official Implementation of "GiT: Towards Generalist Vision Transformer through Universal Language Interface" ☆308 · Updated last month
- PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding, accepted at CVPR 2024 ☆181 · Updated 5 months ago
- LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆212 · Updated 3 months ago
- When do we not need larger vision models? ☆336 · Updated this week
- Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models ☆92 · Updated 3 months ago
- [ECCV 2024] Official implementation of the paper "TAPTR: Tracking Any Point with Transformers as Detection" ☆200 · Updated 3 months ago