FoundationVision / UniRef
[ICCV2023] Segment Every Reference Object in Spatial and Temporal Spaces
☆239 · Updated 5 months ago
Alternatives and similar repositories for UniRef
Users interested in UniRef are comparing it to the repositories listed below.
- [CVPR 2024] VCoder: Versatile Vision Encoders for Multimodal Large Language Models ☆278 · Updated last year
- Grounded Segment Anything: From Objects to Parts ☆410 · Updated 2 years ago
- Official code of "EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model" ☆429 · Updated 3 months ago
- [CVPR 24] The repository provides code for running inference and training for "Segment and Caption Anything" (SCA), links for downloadin… ☆226 · Updated 9 months ago
- Codebase for the Recognize Anything Model (RAM) ☆81 · Updated last year
- ☆186 · Updated last month
- [NeurIPS2023] Code release for "Hierarchical Open-vocabulary Universal Image Segmentation" ☆287 · Updated 3 weeks ago
- This method uses Segment Anything and CLIP to ground and count any object that matches a custom text prompt, without requiring any point … (a minimal sketch of this pipeline appears after this list) ☆164 · Updated 2 years ago
- [ICCV2023] VLPart: Going Denser with Open-Vocabulary Part Segmentation ☆380 · Updated last year
- Official Code for Tracking Any Object Amodally ☆118 · Updated last year
- Image Editing Anything ☆116 · Updated 2 years ago
- Relate Anything Model takes an image as input and uses SAM to identify the corresponding masks within it ☆456 · Updated 2 years ago
- Image Prompter for Gradio ☆92 · Updated last year
- [ICCV2025] Referring to any person or object given a natural language description. Code base for RexSeek and the HumanRef Benchmark ☆138 · Updated 3 months ago
- ☆179 · Updated 8 months ago
- This is an implementation of zero-shot instance segmentation using Segment Anything ☆310 · Updated 2 years ago
- Combines "segment-anything" with multi-object tracking (MOT) to enable multi-object tracking and segmentation ("MOTS") ☆156 · Updated 2 years ago
- LLaVA-Interactive-Demo ☆374 · Updated 11 months ago
- [IJCV 2024] MosaicFusion: Diffusion Models as Data Augmenters for Large Vocabulary Instance Segmentation ☆123 · Updated 9 months ago
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆198 · Updated 6 months ago
- Recognize Any Regions ☆122 · Updated 6 months ago
- A Graph-Based Approach for Category-Agnostic Pose Estimation [ECCV 2024] ☆367 · Updated 7 months ago
- ZIM: Zero-Shot Image Matting for Anything ☆294 · Updated 7 months ago
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆193 · Updated 5 months ago
- Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models ☆125 · Updated 11 months ago
- [CVPR 2024] PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding ☆229 · Updated 5 months ago
- [NeurIPS 2024] SlimSAM: 0.1% Data Makes Segment Anything Slim ☆334 · Updated 4 months ago
- RobustSAM: Segment Anything Robustly on Degraded Images (CVPR 2024 Highlight) ☆355 · Updated 10 months ago
- [ECCV 2024] Tokenize Anything via Prompting ☆585 · Updated 7 months ago
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" ☆481 · Updated last year
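For the SAM + CLIP grounding-and-counting entry above, the rough recipe is: generate class-agnostic masks with SAM's automatic mask generator, embed each masked region and the text prompt with CLIP, and count the regions whose similarity clears a threshold. The sketch below illustrates that generic pipeline, not the linked repository's actual code; it assumes the `segment_anything` and OpenAI `clip` packages are installed, and the checkpoint path, image file, and similarity threshold are all illustrative placeholders.

```python
# Minimal sketch: ground and count objects matching a text prompt by
# combining SAM's automatic mask generator with CLIP scoring.
# Assumes `segment_anything` and `clip` (openai/CLIP) are installed;
# the checkpoint path, image file, and threshold are illustrative only.
import numpy as np
import torch
import clip
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load SAM and generate class-agnostic masks for the whole image.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")  # hypothetical path
sam.to(device)
mask_generator = SamAutomaticMaskGenerator(sam)

image = np.array(Image.open("example.jpg").convert("RGB"))  # hypothetical file
masks = mask_generator.generate(image)  # list of dicts with "bbox" (XYWH), "segmentation", ...

# Load CLIP and embed the text prompt once.
model, preprocess = clip.load("ViT-B/32", device=device)
prompt = "a photo of a dog"  # custom text prompt
with torch.no_grad():
    text_feat = model.encode_text(clip.tokenize([prompt]).to(device))
    text_feat /= text_feat.norm(dim=-1, keepdim=True)

# Score each mask's crop against the prompt and count the matches.
count = 0
for m in masks:
    x, y, w, h = map(int, m["bbox"])
    if w < 8 or h < 8:
        continue  # skip tiny fragments
    crop = Image.fromarray(image[y:y + h, x:x + w])
    with torch.no_grad():
        img_feat = model.encode_image(preprocess(crop).unsqueeze(0).to(device))
        img_feat /= img_feat.norm(dim=-1, keepdim=True)
        score = (img_feat @ text_feat.T).item()  # cosine similarity
    if score > 0.25:  # illustrative threshold; needs tuning per prompt
        count += 1

print(f"{count} region(s) matched {prompt!r}")
```

A single-prompt cosine threshold is brittle, since raw CLIP similarities cluster in a narrow range; a common refinement is to score each crop against the target prompt plus a few negative prompts and take a softmax over prompts, so a match is judged relative to alternatives rather than against an absolute cutoff.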