dvlab-research / LISA
Project Page for "LISA: Reasoning Segmentation via Large Language Model"
☆2,510 · Updated 9 months ago
Alternatives and similar repositories for LISA
Users interested in LISA are comparing it to the repositories listed below.
- [ECCV 2024] Official implementation of the paper "Semantic-SAM: Segment and Recognize Anything at Any Granularity" ☆2,782 · Updated 5 months ago
- [CVPR 2024 Highlight] GLEE: General Object Foundation Model for Images and Videos at Scale ☆1,165 · Updated last year
- Personalize Segment Anything Model (SAM) with 1 shot in 10 seconds ☆1,635 · Updated last year
- (TPAMI 2024) A Survey on Open Vocabulary Learning ☆969 · Updated 8 months ago
- VisionLLM Series ☆1,130 · Updated 9 months ago
- Automated dense category annotation engine that serves as the initial semantic labeling for the Segment Anything dataset (SA-1B). ☆2,296 · Updated 2 years ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆855 · Updated 4 months ago
- [CVPR 2024] The code for "Osprey: Pixel Understanding with Visual Instruction Tuning" ☆836 · Updated 3 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆929 · Updated 4 months ago
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once" ☆4,751 · Updated last year
- Grounded Language-Image Pre-training ☆2,550 · Updated last year
- EVA Series: Visual Representation Fantasies from BAAI ☆2,614 · Updated last year
- Adapting Meta AI's Segment Anything to Downstream Tasks with Adapters and Prompts ☆1,360 · Updated last week
- Painter & SegGPT Series: Vision Foundation Models from BAAI ☆2,585 · Updated last year
- Open-source and strong foundation image recognition models. ☆3,498 · Updated 9 months ago
- EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything ☆2,446 · Updated 11 months ago
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ☆1,351 · Updated last year
- An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary alg… ☆3,084 · Updated last year
- Emu Series: Generative Multimodal Models from BAAI ☆1,760 · Updated last year
- ☆1,838 · Updated last year
- Collection of AWESOME vision-language models for vision tasks ☆3,024 · Updated last month
- [ECCV 2024] The official code of the paper "Open-Vocabulary SAM". ☆1,021 · Updated 4 months ago
- [CVPR 2023] Official implementation of X-Decoder for generalized decoding for pixel, image, and language ☆1,337 · Updated 2 years ago
- 【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆852 · Updated last year
- A general representation model across vision, audio, and language modalities. Paper: ONE-PEACE: Exploring One General Representation Model To… ☆1,061 · Updated last year
- Meta-Transformer for Unified Multimodal Learning ☆1,644 · Updated 2 years ago
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 ☆3,088 · Updated 3 weeks ago
- Official PyTorch implementation of ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models [CVPR 2023 Highlight] ☆929 · Updated last year
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆2,125 · Updated last week
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆1,064 · Updated 10 months ago