TeleeMa / SADE
An Examination of the Compositionality of Large Generative Vision-Language Models
☆19 · Updated last year
Alternatives and similar repositories for SADE
Users interested in SADE are comparing it to the repositories listed below.
- E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆71 · Updated 11 months ago
- For Ego4D VQ3D Task ☆22 · Updated last year
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆76 · Updated 3 months ago
- [NeurIPS 2023] OV-PARTS: Towards Open-Vocabulary Part Segmentation ☆92 · Updated last year
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆58 · Updated last year
- [ECCV 2024] OpenPSG: Open-set Panoptic Scene Graph Generation via Large Multimodal Models ☆49 · Updated 11 months ago
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) ☆44 · Updated last year
- Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 ☆70 · Updated last year
- [IJCV 2025] VLPrompt-PSG: Vision-Language Prompting for Panoptic Scene Graph Generation ☆28 · Updated last year
- Code for the paper "Detecting Any Human-Object Interaction Relationship: Universal HOI Detector with Spatial Prompt Learning on Foundatio…" ☆28 · Updated 2 years ago
- ☆26 · Updated 8 months ago
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆20 · Updated last year
- VisualGPTScore for visio-linguistic reasoning ☆27 · Updated 2 years ago
- (NeurIPS 2024 Spotlight) TOPA: Extend Large Language Models for Video Understanding via Text-Only Pre-Alignment ☆30 · Updated last year
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 ☆133 · Updated 7 months ago
- Egocentric Video Understanding Dataset (EVUD) ☆32 · Updated last year
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) ☆35 · Updated last year
- ☆24 · Updated 2 years ago
- A collection of 3D vision-and-language papers and datasets (e.g., 3D Visual Grounding, 3D Question Answering, and 3D Dense Captioning) ☆100 · Updated 2 years ago
- OpenScan: A Benchmark for Generalized Open-Vocabulary 3D Scene Understanding ☆19 · Updated 2 weeks ago
- [ECCV 2022] A PyTorch implementation of the paper "Spatial and Visual Perspective-Taking via View Rotation and Relation Reasoning for Embo…" ☆13 · Updated 2 years ago
- [ECCV 2024, Oral, Best Paper Finalist] Official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation…" ☆39 · Updated 9 months ago
- [CVPR 2025] Benchmarking Large Vision-Language Models via Directed Scene Graph for Comprehensive Image Captioning ☆37 · Updated 8 months ago
- [CVPR'25] EgoTextVQA: Towards Egocentric Scene-Text Aware Video Question Answering ☆43 · Updated 6 months ago
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) ☆52 · Updated last year
- [CVPR 2022 Oral] 3DJCG: A Unified Framework for Joint Dense Captioning and Visual Grounding on 3D Point Clouds ☆56 · Updated 2 years ago
- ☆63 · Updated 2 years ago
- Official Implementation of Frequency-enhanced Data Augmentation for Vision-and-Language Navigation (NeurIPS 2023) ☆14 · Updated last year
- Official implementation of the EMNLP Findings paper, VideoINSTA: Zero-shot Long-Form Video Understanding via Informative Spatia…" ☆24 · Updated last year
- [NeurIPS 2024] Official code for the paper "EZ-HOI: VLM Adaptation via Guided Prompt Learning for Zero-Shot HOI Detection" ☆41 · Updated 5 months ago