xk-huang / segment-caption-anything
[CVPR'24] The repository provides code for running inference and training with "Segment and Caption Anything" (SCA), links for downloading the trained model checkpoints, and example notebooks plus a Gradio demo that show how to use the model.
☆231 · Updated last year
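For orientation, the prompt-then-caption flow the demo implements looks roughly like the sketch below. The SAM calls follow the real `segment-anything` package API; the checkpoint filename, image path, point coordinates, and the final `caption_region` call are illustrative assumptions, not SCA's actual interface — consult the repo's notebooks for the real entry points.

```python
# Minimal sketch of SCA's prompt-then-caption flow (assumed names hedged below).
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM backbone; checkpoint path is an assumption for illustration.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.array(Image.open("example.jpg").convert("RGB"))
predictor.set_image(image)

point_coords = np.array([[500, 375]])  # (x, y) pixel prompt for the region of interest
point_labels = np.array([1])           # 1 = foreground point
masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=False,
)

# SCA attaches a lightweight text decoder to SAM's mask tokens; the call
# below is a hypothetical placeholder for that captioning step (the repo
# exposes it through its own scripts and notebooks, not this function name).
# caption = caption_region(image, masks[0])
```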
Alternatives and similar repositories for segment-caption-anything
Users interested in segment-caption-anything are comparing it to the repositories listed below.
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" ☆205 · Updated 10 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆331 · Updated last year
- [NeurIPS 2023] Code release for "Hierarchical Open-vocabulary Universal Image Segmentation" ☆292 · Updated 5 months ago
- [ECCV 2024] Official implementation of "PSALM: Pixelwise SegmentAtion with Large Multi-Modal Model" ☆262 · Updated 11 months ago
- [CVPR 2024] PixelLM is an effective and efficient LMM for pixel-level reasoning and understanding. ☆243 · Updated 9 months ago
- [ECCV 2024] VISA: Reasoning Video Object Segmentation via Large Language Model ☆196 · Updated last year
- ☆194 · Updated 6 months ago
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ☆141 · Updated 11 months ago
- Official implementation of the paper "CLIP-DINOiser: Teaching CLIP a few DINO tricks" ☆263 · Updated last year
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better ☆295 · Updated 10 months ago
- [ICCV 2023] VLPart: Going Denser with Open-Vocabulary Part Segmentation ☆390 · Updated 2 years ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆168 · Updated last year
- Large-Vocabulary Video Instance Segmentation dataset ☆95 · Updated last year
- ☆114 · Updated last year
- ☆189 · Updated last year
- Recognize Any Regions ☆122 · Updated 11 months ago
- Official repository for the paper "MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning" (https://arxiv.org/abs/2406.17770) ☆158 · Updated last year
- [ICCV 2023] RLIPv2: Fast Scaling of Relational Language-Image Pre-training ☆135 · Updated last year
- [ICLR 2025] Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆91 · Updated this week
- Densely Captioned Images (DCI) dataset repository. ☆194 · Updated last year
- [NeurIPS 2024] Official implementation of the paper "Interfacing Foundation Models' Embeddings" ☆128 · Updated last year
- [CVPR 2024] ViT-Lens: Towards Omni-modal Representations ☆184 · Updated 10 months ago
- [ECCV 2024] PartGLEE: A Foundation Model for Recognizing and Parsing Any Objects ☆54 · Updated last year
- ☆100 · Updated last year
- [ICLR 2024 Spotlight] Code release of CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction ☆198 · Updated last year
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions ☆244 · Updated last year
- [CVPR 2024] Generative Region-Language Pretraining for Open-Ended Object Detection ☆186 · Updated 8 months ago
- A detection/segmentation dataset with labels characterized by intricate and flexible expressions. "Described Object Detection: Liberating… ☆138 · Updated last year
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆80 · Updated last year
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆158 · Updated 11 months ago