JacobYuan7 / RLIPv2
[ICCV 2023] RLIPv2: Fast Scaling of Relational Language-Image Pre-training
☆135 · Updated last year
Alternatives and similar repositories for RLIPv2
Users interested in RLIPv2 are comparing it to the repositories listed below.
- [CVPR 2023] HOICLIP: Efficient Knowledge Transfer for HOI Detection with Vision-Language Models ☆68 · Updated last year
- [NeurIPS 2022 Spotlight] RLIP: Relational Language-Image Pre-training and a series of other methods to solve HOI detection and Scene Grap… ☆78 · Updated last year
- [ICCV'23] Official PyTorch implementation for paper "Exploring Predicate Visual Context in Detecting Human-Object Interactions" ☆85 · Updated last year
- [CVPR'23] Benchmarking Panoptic Video Scene Graph Generation (PVSG) ☆99 · Updated last year
- Official implementation of the paper "Boosting Human-Object Interaction Detection with Text-to-Image Diffusion Model" ☆64 · Updated 2 years ago
- [CVPR 2024] Official PyTorch implementation of the paper "One For All: Video Conversation is Feasible Without Video Instruction Tuning" ☆35 · Updated last year
- Disentangled Pre-training for Human-Object Interaction Detection ☆26 · Updated 2 months ago
- [CVPR 24] The repository provides code for running inference and training for "Segment and Caption Anything" (SCA), links for downloadin… ☆230 · Updated last year
- Large-Vocabulary Video Instance Segmentation dataset ☆95 · Updated last year
- A detection/segmentation dataset with labels characterized by intricate and flexible expressions. "Described Object Detection: Liberating… ☆138 · Updated last year
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆100 · Updated last year
- The official code for Relational Context Learning for Human-Object Interaction Detection (CVPR 2023) ☆52 · Updated 2 years ago
- PyTorch code for "Unified Coarse-to-Fine Alignment for Video-Text Retrieval" (ICCV 2023) ☆66 · Updated last year
- ☆119 · Updated last year
- Large Language Models are Temporal and Causal Reasoners for Video Question Answering (EMNLP 2023) ☆77 · Updated 7 months ago
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆61 · Updated last year
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆95 · Updated last year
- Foundation Models for Video Understanding: A Survey ☆141 · Updated 4 months ago
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆70 · Updated 10 months ago
- [ICLR 2024] FROSTER: Frozen CLIP is a Strong Teacher for Open-Vocabulary Action Recognition ☆90 · Updated 10 months ago
- Official PyTorch code of GroundVQA (CVPR'24) ☆64 · Updated last year
- ☆33 · Updated 2 years ago
- [ECCV 2024 Oral] ActionVOS: Actions as Prompts for Video Object Segmentation ☆31 · Updated 11 months ago
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 ☆132 · Updated 6 months ago
- [ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM ☆86 · Updated last year
- [ECCV 2024] ControlCap: Controllable Region-level Captioning ☆79 · Updated last year
- [AAAI 2025] VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding ☆115 · Updated 11 months ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆259 · Updated 3 months ago
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆95 · Updated 10 months ago
- [ICCV 2023] Official implementation of Memory-and-Anticipation Transformer for Online Action Understanding ☆49 · Updated 2 years ago