JacobYuan7 / RLIPv2
[ICCV 2023] RLIPv2: Fast Scaling of Relational Language-Image Pre-training
☆135 · Updated last year
Alternatives and similar repositories for RLIPv2
Users interested in RLIPv2 are comparing it to the repositories listed below.
- [CVPR 2023] HOICLIP: Efficient Knowledge Transfer for HOI Detection with Vision-Language Models ☆68 · Updated last year
- [NeurIPS 2022 Spotlight] RLIP: Relational Language-Image Pre-training and a series of other methods to solve HOI detection and Scene Grap… ☆78 · Updated last year
- [ICCV'23] Official PyTorch implementation for the paper "Exploring Predicate Visual Context in Detecting Human-Object Interactions" ☆87 · Updated last year
- [CVPR 2024] Official PyTorch implementation of the paper "One For All: Video Conversation is Feasible Without Video Instruction Tuning" ☆35 · Updated last year
- Benchmarking Panoptic Video Scene Graph Generation (PVSG), CVPR'23 ☆101 · Updated last year
- [ICLR 2024] FROSTER: Frozen CLIP is a Strong Teacher for Open-Vocabulary Action Recognition ☆93 · Updated 11 months ago
- Official implementation of the paper "Boosting Human-Object Interaction Detection with Text-to-Image Diffusion Model" ☆66 · Updated 2 years ago
- Foundation Models for Video Understanding: A Survey ☆141 · Updated 5 months ago
- A detection/segmentation dataset with labels characterized by intricate and flexible expressions. "Described Object Detection: Liberating… ☆138 · Updated last year
- ☆121 · Updated last year
- Disentangled Pre-training for Human-Object Interaction Detection ☆27 · Updated 3 months ago
- Code for the CVPR 2025 paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ☆151 · Updated 6 months ago
- [CVPR'24] The repository provides code for running inference and training for "Segment and Caption Anything" (SCA), links for downloadin… ☆232 · Updated last year
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆260 · Updated 4 months ago
- [ECCV 2024] Official code implementation of Merlin: Empowering Multimodal LLMs with Foresight Minds ☆96 · Updated last year
- PyTorch code for "Unified Coarse-to-Fine Alignment for Video-Text Retrieval" (ICCV 2023) ☆66 · Updated last year
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆100 · Updated last year
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆64 · Updated last year
- MAtch, eXpand and Improve: Unsupervised Finetuning for Zero-Shot Action Recognition with Language Knowledge (ICCV 2023) ☆30 · Updated 2 years ago
- Large Language Models are Temporal and Causal Reasoners for Video Question Answering (EMNLP 2023) ☆77 · Updated 9 months ago
- ☆70 · Updated last year
- Large-Vocabulary Video Instance Segmentation dataset ☆95 · Updated last year
- Official PyTorch code of GroundVQA (CVPR'24) ☆64 · Updated last year
- UniMD: Towards Unifying Moment Retrieval and Temporal Action Detection ☆55 · Updated last year
- [AAAI 2023 Oral] VLTinT: Visual-Linguistic Transformer-in-Transformer for Coherent Video Paragraph Captioning ☆68 · Updated last year
- 👾 E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding (NeurIPS 2024) ☆71 · Updated 11 months ago
- PyTorch implementation for Egoinstructor (CVPR 2024) ☆28 · Updated last year
- [ECCV 2024] Elysium: Exploring Object-level Perception in Videos via MLLM ☆86 · Updated last year
- [ECCV 2024 Oral] ActionVOS: Actions as Prompts for Video Object Segmentation ☆31 · Updated last year
- ☆33 · Updated 2 years ago