alibaba-mmai-research / CLIP-FSAR
Code for our IJCV 2023 paper "CLIP-guided Prototype Modulating for Few-shot Action Recognition".
☆64 · Updated last year
Alternatives and similar repositories for CLIP-FSAR
Users interested in CLIP-FSAR are comparing it to the repositories listed below.
- Code for our CVPR 2023 paper "MoLo: Motion-augmented Long-short Contrastive Learning for Few-shot Action Recognition". ☆45 · Updated last year
- [ICCV'23] Official PyTorch implementation for the paper "Exploring Predicate Visual Context in Detecting Human-Object Interactions". ☆81 · Updated last year
- The official repository for the ICLR 2024 paper "FROSTER: Frozen CLIP is a Strong Teacher for Open-Vocabulary Action Recognition". ☆83 · Updated 6 months ago
- Official code of the ACM MM 2024 paper "Unseen No More: Unlocking the Potential of CLIP for Generative Zero-shot HOI Detection". ☆23 · Updated 11 months ago
- [AAAI'25] Building a Multi-modal Spatiotemporal Expert for Zero-shot Action Recognition with CLIP. ☆15 · Updated 5 months ago
- [ICCV 2023] Official implementation of Memory-and-Anticipation Transformer for Online Action Understanding. ☆45 · Updated last year
- Code for our CVPR 2022 paper "Hybrid Relation Guided Set Matching for Few-shot Action Recognition". ☆24 · Updated 2 years ago
- Code for our paper "HyRSM++: Hybrid Relation Guided Temporal Set Matching for Few-shot Action Recognition". ☆13 · Updated 2 years ago
- Video Test-Time Adaptation for Action Recognition (CVPR 2023). ☆48 · Updated 9 months ago
- CVPR 2023 accepted paper "HOICLIP: Efficient Knowledge Transfer for HOI Detection with Vision-Language Models". ☆67 · Updated last year
- MAtch, eXpand and Improve: Unsupervised Finetuning for Zero-Shot Action Recognition with Language Knowledge (ICCV 2023). ☆30 · Updated last year
- [ICCV 2023] Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning. ☆41 · Updated last year
- [CVPR'24] OST: Refining Text Knowledge with Optimal Spatio-Temporal Descriptor for General Video Recognition. ☆38 · Updated last year
- ☆81 · Updated 2 years ago
- [arXiv 2023] This repository contains the code for "MUPPET: Multi-Modal Few-Shot Temporal Action Detection". ☆15 · Updated last year
- ☆26 · Updated last year
- ☆39 · Updated last year
- Official repository for "Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting" [CVPR 2023]. ☆120 · Updated 2 years ago
- [CVPR 2023] Official PyTorch implementation of the paper "GAP: Post-Processing Temporal Action Detection". ☆18 · Updated last year
- [ICCV 2023 Oral] Implicit Temporal Modeling with Learnable Alignment for Video Recognition. ☆37 · Updated last year
- (CVPR 2024) Realigning Confidence with Temporal Saliency Information for Point-level Weakly-Supervised Temporal Action Localization. ☆19 · Updated last year
- Official implementation of "Test-Time Zero-Shot Temporal Action Localization", CVPR 2024. ☆63 · Updated 10 months ago
- [CVPR 2022] Official PyTorch implementation for "Spatio-temporal Relation Modeling for Few-shot Action Recognition". SOTA results for Few… ☆99 · Updated 2 years ago
- ☆117 · Updated last year
- BEAR: a new BEnchmark on video Action Recognition. ☆44 · Updated last year
- ☆17 · Updated last year
- [CVPR 2022] Fine-grained Temporal Contrastive Learning for Weakly-supervised Temporal Action Localization. ☆48 · Updated last year
- [ECCV 2022] Official PyTorch implementation of the paper "Zero-Shot Temporal Action Detection via Vision-Language Prompting". ☆106 · Updated last year
- [ECCV 2022] Dual-Evidential Learning for Weakly-supervised Temporal Action Localization. ☆44 · Updated last year
- [AAAI 2023] Revisiting the Spatial and Temporal Modeling for Few-shot Action Recognition (SloshNet). ☆13 · Updated last year