aimagelab / awesome-human-visual-attention
This repository contains a curated list of research papers and resources on saliency and scanpath prediction, human attention, and human visual search.
☆58 · Updated 6 months ago
Alternatives and similar repositories for awesome-human-visual-attention
Users interested in awesome-human-visual-attention are comparing it to the repositories listed below.
- ☆13 · Updated 8 months ago
- Official codebase for "Gazeformer: Scalable, Effective and Fast Prediction of Goal-Directed Human Attention" (CVPR 2023) ☆39 · Updated last year
- ☆84 · Updated 2 years ago
- Official code repo for TCLR: Temporal Contrastive Learning for Video Representation [CVIU-2022] ☆39 · Updated last year
- ☆23 · Updated last year
- ☆57 · Updated 3 years ago
- Official Repository for VLLMs Provide Better Context for Emotion Understanding Through Common Sense Reasoning ☆24 · Updated last year
- Composed Video Retrieval ☆61 · Updated last year
- Code for CVPR2023 paper "Collaborative Noisy Label Cleaner: Learning Scene-aware Trailers for Multi-modal Highlight Detection in Movies" ☆17 · Updated 2 years ago
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV, 2023] ☆100 · Updated last year
- [CVPR 2023 & IJCV 2025] Positive-Augmented Contrastive Learning for Image and Video Captioning Evaluation ☆64 · Updated 3 months ago
- Official repository for "Video-FocalNets: Spatio-Temporal Focal Modulation for Video Action Recognition" [ICCV 2023] ☆100 · Updated last year
- [BMVC 2023] Zero-shot Composed Text-Image Retrieval ☆54 · Updated 11 months ago
- [NeurIPS2022] Mind Reader: Reconstructing complex images from brain activities ☆62 · Updated 2 years ago
- (CVPR 2023) Official implementation of the paper "Weakly Supervised Video Representation Learning with Unaligned Text for Sequential Videos… ☆31 · Updated last year
- Official repository for "Self-Supervised Video Transformer" (CVPR'22)☆107Updated last year
- Official implementation of "Test-Time Zero-Shot Temporal Action Localization", CVPR 2024☆67Updated last year
- ICLR 2023 DeCap: Decoding CLIP Latents for Zero-shot Captioning☆137Updated 2 years ago
- [BMVC2022, IJCV2023, Best Student Paper, Spotlight] Official codes for the paper "In the Eye of Transformer: Global-Local Correlation for…☆29Updated 8 months ago
- [ICLR 2024] FROSTER: Frozen CLIP is a Strong Teacher for Open-Vocabulary Action Recognition☆91Updated 9 months ago
- Learning Bottleneck Concepts in Image Classification (CVPR 2023)☆41Updated 2 years ago
- Official implementation of "Everything at Once - Multi-modal Fusion Transformer for Video Retrieval." CVPR 2022☆114Updated 3 years ago
- ☆68Updated last year
- Official code repo of PIN: Positional Insert Unlocks Object Localisation Abilities in VLMs☆26Updated 9 months ago
- [CVPR2023] Context De-confounded Emotion Recognition☆18Updated 2 years ago
- [WACV 2024] Code release for "VEATIC: Video-based Emotion and Affect Tracking in Context Dataset"☆16Updated 2 months ago
- [NeurIPS'24 Spotlight] GAIA: Rethinking Action Quality Assessment for AI-Generated Videos☆35Updated 7 months ago
- [ECCV 2024] - Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation☆64Updated 3 months ago
- Actor-agnostic Multi-label Action Recognition with Multi-modal Query [ICCVW '23]☆24Updated 2 years ago
- [CVPR23 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally?☆35Updated 2 years ago