Artanic30 / HOICLIP
CVPR 2023 accepted paper: "HOICLIP: Efficient Knowledge Transfer for HOI Detection with Vision-Language Models"
☆62 · Updated 11 months ago
Alternatives and similar repositories for HOICLIP:
Users interested in HOICLIP are comparing it with the repositories listed below.
- [ICCV'23] Official PyTorch implementation of the paper "Exploring Predicate Visual Context in Detecting Human-Object Interactions" ☆73 · Updated 8 months ago
- Code for our paper "Category Query Learning for Human-Object Interaction Classification" (CVPR 2023) ☆35 · Updated last year
- The official repository for the ICLR 2024 paper "FROSTER: Frozen CLIP is a Strong Teacher for Open-Vocabulary Action Recognition" ☆70 · Updated last month
- [NeurIPS 2022 Spotlight] RLIP: Relational Language-Image Pre-training and a series of other methods to solve HOI detection and Scene Grap… ☆73 · Updated 9 months ago
- The official code for "Relational Context Learning for Human-Object Interaction Detection" (CVPR 2023) ☆48 · Updated last year
- Code for our CVPR 2022 paper "GEN-VLKT: Simplify Association and Enhance Interaction Understanding for HOI Detection" ☆84 · Updated 11 months ago
- [AAAI 2023] Repo for the paper "End-to-End Zero-Shot HOI Detection via Vision and Language Knowledge Distillation" ☆22 · Updated last year
- ☆22 · Updated 2 years ago
- ☆23 · Updated 8 months ago
- ☆14 · Updated 9 months ago
- [ECCV 2022] Towards Hard-Positive Query Mining for DETR-based Human-Object Interaction Detection ☆27 · Updated last year
- [ICCV 2023] Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning ☆40 · Updated last year
- Official code of the ACM MM 2024 paper "Unseen No More: Unlocking the Potential of CLIP for Generative Zero-shot HOI Detection" ☆18 · Updated 6 months ago
- Official implementation of the paper "Boosting Human-Object Interaction Detection with Text-to-Image Diffusion Model" ☆57 · Updated last year
- Utilities for the human-object interaction detection dataset HICO-DET ☆56 · Updated last year
- ☆39 · Updated 11 months ago
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆51 · Updated 8 months ago
- Code for the ECCV 2022 paper "Mining Cross-Person Cues for Body-Part Interactiveness Learning in HOI Detection" ☆36 · Updated 2 years ago
- ☆28 · Updated last year
- Official repository for "Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting" (CVPR 2023) ☆115 · Updated last year
- ☆31 · Updated 2 years ago
- [CVPR 2024] Official PyTorch implementation of the paper "One For All: Video Conversation is Feasible Without Video Instruction Tuning" ☆31 · Updated last year
- SeqTR: A Simple yet Universal Network for Visual Grounding ☆132 · Updated 4 months ago
- [CVPR 2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory" ☆67 · Updated 4 months ago
- ☆34 · Updated last year
- Code for our IJCV 2023 paper "CLIP-guided Prototype Modulating for Few-shot Action Recognition" ☆61 · Updated last year
- [arXiv 2023] This repository contains the code for "MUPPET: Multi-Modal Few-Shot Temporal Action Detection" ☆15 · Updated last year
- ☆76 · Updated last year
- Code for the paper "Detecting Any Human-Object Interaction Relationship: Universal HOI Detector with Spatial Prompt Learning on Foundatio… ☆27 · Updated last year
- ☆47 · Updated 2 years ago