FoundationVision / UniRef
[ICCV2023] Segment Every Reference Object in Spatial and Temporal Spaces
☆235 · Updated 10 months ago
Related projects
Alternatives and complementary repositories for UniRef
- Official code of "EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model" ☆312 · Updated this week
- VCoder: Versatile Vision Encoders for Multimodal Large Language Models, arXiv 2023 / CVPR 2024 ☆261 · Updated 7 months ago
- [CVPR 24] The repository provides code for running inference and training for "Segment and Caption Anything" (SCA), links for downloadin… ☆202 · Updated last month
- Grounded Segment Anything: From Objects to Parts ☆388 · Updated last year
- [NeurIPS 2023] Code release for "Hierarchical Open-vocabulary Universal Image Segmentation" ☆271 · Updated 8 months ago
- [NeurIPS 2023] This repo contains the code for our paper Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convoluti… ☆287 · Updated 9 months ago
- Multimodal Models in the Real World ☆403 · Updated 3 weeks ago
- [ICCV 2023] VLPart: Going Denser with Open-Vocabulary Part Segmentation ☆357 · Updated last year
- Data release for the ImageInWords (IIW) paper ☆200 · Updated this week
- Codebase for the Recognize Anything Model (RAM) ☆64 · Updated 11 months ago
- [IJCV 2024] MosaicFusion: Diffusion Models as Data Augmenters for Large Vocabulary Instance Segmentation ☆112 · Updated last month
- Image Editing Anything ☆112 · Updated last year
- [NeurIPS 2024] SlimSAM: 0.1% Data Makes Segment Anything Slim ☆303 · Updated this week
- [ECCV 2024] Official implementation of the paper "TAPTR: Tracking Any Point with Transformers as Detection" ☆200 · Updated 3 months ago
- Recognize Any Regions ☆118 · Updated last month
- PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding. Accepted at CVPR 2024. ☆181 · Updated 5 months ago
- [NeurIPS 2023] Customize spatial layouts for conditional image synthesis models, e.g., ControlNet, using GPT ☆132 · Updated 6 months ago
- Image Prompter for Gradio ☆74 · Updated 11 months ago
- Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models ☆92 · Updated 3 months ago
- [ECCV 2024] DragAnything: Motion Control for Anything using Entity Representation ☆432 · Updated 4 months ago
- Combining "segment-anything" with MOT, ushering in the era of "MOTS" ☆146 · Updated last year
- Official code for Tracking Any Object Amodally ☆113 · Updated 4 months ago
- Official implementation of the ECCV paper "SwapAnything: Enabling Arbitrary Object Swapping in Personalized Visual Editing" ☆232 · Updated last month
- InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions ☆126 · Updated 9 months ago