Artanic30 / HOICLIP
Repository for the CVPR 2023 paper "HOICLIP: Efficient Knowledge Transfer for HOI Detection with Vision-Language Models"
☆68 · Updated last year
Alternatives and similar repositories for HOICLIP
Users interested in HOICLIP are comparing it to the repositories listed below.
- The official repository for the ICLR 2024 paper "FROSTER: Frozen CLIP is a Strong Teacher for Open-Vocabulary Action Recognition" ☆87 · Updated 7 months ago
- [ICCV'23] Official PyTorch implementation of the paper "Exploring Predicate Visual Context in Detecting Human-Object Interactions" ☆84 · Updated last year
- [NeurIPS 2022 Spotlight] RLIP: Relational Language-Image Pre-training and a series of other methods to solve HOI detection and Scene Graph Generation ☆77 · Updated last year
- ☆117 · Updated last year
- ☆40 · Updated last year
- Official repository for "Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting" [CVPR 2023] ☆123 · Updated 2 years ago
- SeqTR: A Simple yet Universal Network for Visual Grounding ☆141 · Updated 10 months ago
- Code for our paper "Category Query Learning for Human-Object Interaction Classification" (CVPR 2023) ☆37 · Updated 2 years ago
- ECCV 2022: Towards Hard-Positive Query Mining for DETR-based Human-Object Interaction Detection ☆27 · Updated 2 years ago
- [CVPR'24] OST: Refining Text Knowledge with Optimal Spatio-Temporal Descriptor for General Video Recognition ☆38 · Updated last year
- ICCV 2023: Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning ☆41 · Updated last year
- [CVPR 2023] Code Release of Aligning Bag of Regions for Open-Vocabulary Object Detection ☆182 · Updated last year
- The official code for "Relational Context Learning for Human-Object Interaction Detection", CVPR 2023 ☆51 · Updated 2 years ago
- ☆30 · Updated 2 years ago
- Code for our CVPR 2022 paper "GEN-VLKT: Simplify Association and Enhance Interaction Understanding for HOI Detection" ☆88 · Updated last year
- UniMD: Towards Unifying Moment Retrieval and Temporal Action Detection ☆51 · Updated last year
- [ICCV 2023] RLIPv2: Fast Scaling of Relational Language-Image Pre-training ☆133 · Updated last year
- Code for our IJCV 2023 paper "CLIP-guided Prototype Modulating for Few-shot Action Recognition" ☆68 · Updated last year
- Disentangled Pre-training for Human-Object Interaction Detection ☆25 · Updated 2 months ago
- Official code of the ACM MM 2024 paper "Unseen No More: Unlocking the Potential of CLIP for Generative Zero-shot HOI Detection" ☆23 · Updated last year
- ☆82 · Updated 2 years ago
- Referring Video Object Segmentation / Multi-Object Tracking Repo ☆88 · Updated 2 years ago
- PyTorch code for "Unified Coarse-to-Fine Alignment for Video-Text Retrieval" (ICCV 2023) ☆66 · Updated last year
- A lightweight codebase for referring expression comprehension and segmentation ☆55 · Updated 3 years ago
- [TMM 2023] Self-paced Curriculum Adapting of CLIP for Visual Grounding ☆130 · Updated 2 weeks ago
- [CVPR 2024 Highlight] Official repository of the paper "The devil is in the fine-grained details: Evaluating open-vocabulary object detect…" ☆58 · Updated 4 months ago
- Improving Visual Grounding with Visual-Linguistic Verification and Iterative Reasoning, CVPR 2022 ☆95 · Updated 2 years ago
- [ICLR 2024 Spotlight] Code Release of CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction ☆193 · Updated last year
- [CVPR 2024] Official PyTorch implementation of the paper "One For All: Video Conversation is Feasible Without Video Instruction Tuning" ☆34 · Updated last year
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆56 · Updated last year