google-research-datasets / eev
The Evoked Expressions in Video dataset contains videos paired with the expected facial expressions over time exhibited by people reacting to the video content.
☆38 · Updated 3 years ago
Alternatives and similar repositories for eev
Users interested in eev are comparing it to the libraries listed below.
- ☆31 · Updated 3 years ago
- RareAct: A video dataset of unusual interactions ☆32 · Updated 4 years ago
- We present a framework for training multi-modal deep learning models on unlabelled video data by forcing the network to learn invariances… ☆47 · Updated 3 years ago
- What Can You Learn from Your Muscles? Learning Visual Representation from Human Interactions (https://arxiv.org/pdf/2010.08539.pdf) ☆39 · Updated 4 years ago
- [BMVC 2022] The official code of our paper "Revisiting Self-Supervised Contrastive Learning for Facial Expression Recognition" ☆22 · Updated 11 months ago
- Code for the AVLnet (Interspeech 2021) and Cascaded Multilingual (Interspeech 2021) papers ☆52 · Updated 3 years ago
- An official PyTorch implementation of Learning To Recognize Procedural Activities with Distant Supervision. In this repository, w… ☆42 · Updated 2 years ago
- Self-Supervised Learning by Cross-Modal Audio-Video Clustering (NeurIPS 2020) ☆90 · Updated 2 years ago
- PyTorch implementation of the paper "Distilling Audio-Visual Knowledge by Compositional Contrastive Learning" (CVPR 2021) ☆88 · Updated 3 years ago
- ☆31 · Updated 4 years ago
- Multimodal video-audio-text generation and retrieval between every pair of modalities on the MUGEN dataset. The repo contains the traini… ☆40 · Updated 2 years ago
- AViD Dataset: Anonymized Videos from Diverse Countries ☆56 · Updated 2 years ago
- [CVPR 2021] Visual Semantic Role Labeling for Video Understanding (https://arxiv.org/abs/2104.00990) ☆60 · Updated 3 years ago
- ☆73 · Updated 3 years ago
- [AAAI 2023 (Oral)] CrissCross: Self-Supervised Audio-Visual Representation Learning with Relaxed Cross-Modal Synchronicity ☆25 · Updated last year
- Official implementation of AdaMML (https://arxiv.org/abs/2105.05165) ☆51 · Updated 3 years ago
- Code for the Look for the Change paper published at CVPR 2022 ☆36 · Updated 2 years ago
- Code for "Compositional Video Synthesis with Action Graphs", Bar & Herzig et al., ICML 2021 ☆32 · Updated 2 years ago
- ☆22 · Updated 2 years ago
- ☆84 · Updated last year
- An Identity-free Video Dataset for Micro-Gesture Understanding and Emotion Analysis (CVPR 2021) ☆44 · Updated 2 years ago
- Official code for our CVPR 2023 paper "Test of Time: Instilling Video-Language Models with a Sense of Time" ☆45 · Updated last year
- 💭 Intentonomy: Towards Human Intent Understanding [CVPR 2021] ☆37 · Updated 3 years ago
- Released code and data for "Frame-Transformer Emotion Classification Network" (ICMR 2017) ☆17 · Updated 8 years ago
- ☆22 · Updated last year
- EgoCom: A Multi-person Multi-modal Egocentric Communications Dataset ☆57 · Updated 4 years ago
- Video Representation Learning by Recognizing Temporal Transformations (ECCV 2020) ☆48 · Updated 4 years ago
- ☆54 · Updated 3 years ago
- Rank-aware Attention Network from "The Pros and Cons: Rank-aware Temporal Attention for Skill Determination in Long Videos" ☆29 · Updated 4 years ago
- Code accompanying EGO-TOPO: Environment Affordances from Egocentric Video (CVPR 2020) ☆31 · Updated 2 years ago