mbzuai-oryx / VideoGLaMM
[CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos
⭐52 · Updated this week
Alternatives and similar repositories for VideoGLaMM:
Users interested in VideoGLaMM are comparing it to the repositories listed below:
- Official repository of the paper "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite for Video-LMMs" ⭐45 · Updated 7 months ago
- Composed Video Retrieval ⭐53 · Updated 11 months ago
- Official implementation of "InstructSeg: Unifying Instructed Visual Segmentation with Multi-modal Large Language Models" ⭐33 · Updated last month
- [NeurIPS 2024] Official PyTorch implementation of LoTLIP: Improving Language-Image Pre-training for Long Text Understanding ⭐43 · Updated 2 months ago
- [CVPR 2025] Code release of F-LMM: Grounding Frozen Large Multimodal Models ⭐80 · Updated 7 months ago
- cliptrase ⭐34 · Updated 7 months ago
- Official repository for the paper "PruneVid: Visual Token Pruning for Efficient Video Large Language Models" ⭐35 · Updated last month
- Official code and data for "Unveiling Parts Beyond Objects: Towards Finer-Granularity Referring Expression Segmentati…" ⭐64 · Updated 10 months ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ⭐33 · Updated last year
- ⭐29 · Updated 6 months ago
- [CVPR 2025] RAP: Retrieval-Augmented Personalization ⭐33 · Updated last week
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ⭐73 · Updated 5 months ago
- [NeurIPS 2023] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization ⭐103 · Updated last year
- [ECCV 2024] ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation ⭐79 · Updated last week
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ⭐45 · Updated 8 months ago
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ⭐111 · Updated 3 months ago
- [CVPR 2025] Mitigating Object Hallucinations in Large Vision-Language Models with Assembly of Global and Local Attention ⭐28 · Updated 8 months ago
- ⭐16 · Updated last year
- ⭐40 · Updated 6 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ⭐42 · Updated last week
- FreeVA: Offline MLLM as Training-Free Video Assistant ⭐57 · Updated 9 months ago
- [ICLR 2025] Text4Seg: Reimagining Image Segmentation as Text Generation ⭐72 · Updated this week
- Code release for "SegLLM: Multi-round Reasoning Segmentation" ⭐70 · Updated last month
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ⭐38 · Updated last week
- [NAACL'25] Code and documentation for our VANE-Bench paper ⭐11 · Updated last week
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ⭐127 · Updated 4 months ago
- [ECCV 2024] ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference ⭐79 · Updated last week
- [CVPR 2025] LLaVA-ST: A Multimodal Large Language Model for Fine-Grained Spatial-Temporal Understanding ⭐38 · Updated last month
- ⭐29 · Updated 2 weeks ago
- Official PyTorch code of GroundVQA (CVPR'24) ⭐58 · Updated 6 months ago