fredzzhang / pvic
[ICCV'23] Official PyTorch implementation for paper "Exploring Predicate Visual Context in Detecting Human-Object Interactions"
☆86 · Updated last year
Alternatives and similar repositories for pvic
Users interested in pvic are comparing it to the repositories listed below
- The official code for Relational Context Learning for Human-Object Interaction Detection, CVPR 2023. ☆52 · Updated 2 years ago
- ☆28 · Updated last year
- Official code of the ACM MM 2024 paper "Unseen No More: Unlocking the Potential of CLIP for Generative Zero-shot HOI Detection" ☆24 · Updated last year
- CVPR 2023 accepted paper HOICLIP: Efficient Knowledge Transfer for HOI Detection with Vision-Language Models ☆68 · Updated last year
- [ICCV 2023] Official implementation of Memory-and-Anticipation Transformer for Online Action Understanding ☆50 · Updated 2 years ago
- Disentangled Pre-training for Human-Object Interaction Detection ☆26 · Updated 2 months ago
- MAtch, eXpand and Improve: Unsupervised Finetuning for Zero-Shot Action Recognition with Language Knowledge (ICCV 2023) ☆30 · Updated 2 years ago
- Official implementation of the paper "Boosting Human-Object Interaction Detection with Text-to-Image Diffusion Model" ☆66 · Updated 2 years ago
- ☆20 · Updated last year
- Code for our IJCV 2023 paper "CLIP-guided Prototype Modulating for Few-shot Action Recognition". ☆75 · Updated last year
- [ICLR 2024] FROSTER: Frozen CLIP is a Strong Teacher for Open-Vocabulary Action Recognition ☆92 · Updated 10 months ago
- [NeurIPS 2022 Spotlight] RLIP: Relational Language-Image Pre-training and a series of other methods to solve HOI detection and Scene Grap… ☆78 · Updated last year
- Code for the paper "Detecting Any Human-Object Interaction Relationship: Universal HOI Detector with Spatial Prompt Learning on Foundatio…