DmitryRyumin / FG-2024-Papers
FG 2024 Papers: A comprehensive collection of research papers presented at the IEEE International Conference on Automatic Face and Gesture Recognition (FG 2024), one of the premier venues for automatic face and gesture recognition, with links to code implementations where available. ⭐ Explore the cutting edge of facial analysis, gesture recognition, and biometrics with this repository!
☆14 · Updated last year
Alternatives and similar repositories for FG-2024-Papers
Users interested in FG-2024-Papers are comparing it to the repositories listed below.
- Multimodal Empathetic Chatbot ☆41 · Updated last year
- Graph learning framework for long-term video understanding ☆65 · Updated 3 weeks ago
- [CVPR 2023] Official code repository for "How you feelin'? Learning Emotions and Mental States in Movie Scenes". https://arxiv.org/abs/23… ☆56 · Updated 9 months ago
- [Interspeech 2024] SyncVSR: Data-Efficient Visual Speech Recognition with End-to-End Crossmodal Audio Token Synchronization ☆56 · Updated 4 months ago
- [CVPR] MARLIN: Masked Autoencoder for facial video Representation LearnINg ☆254 · Updated 4 months ago
- [WACV 2024] FG-Net: Facial Action Unit Detection with Generalizable Pyramidal Features ☆26 · Updated last year
- Official code for the paper "GestSync: Determining who is speaking without a talking head", published at BMVC 2023 ☆46 · Updated 11 months ago
- Official implementation of the NeurIPS 2023 paper "Leave No Stone Unturned: Mine Extra Knowledge for Imbalanced Facial Expression Recognit…" ☆28 · Updated last year
- [BMVC'23] Prompting Visual-Language Models for Dynamic Facial Expression Recognition ☆127 · Updated 8 months ago
- The official implementation of the paper "Affective Faces for Goal-Driven Dyadic Communication." ☆14 · Updated 2 years ago
- Implementation for the paper "Can Language Models Learn to Listen?" ☆65 · Updated last year
- [Information Fusion 2024] HiCMAE: Hierarchical Contrastive Masked Autoencoder for Self-Supervised Audio-Visual Emotion Recognition ☆111 · Updated 9 months ago
- [ECCV 2024] Dyadic Interaction Modeling for Social Behavior Generation ☆58 · Updated 3 months ago
- Implementation of the paper "Audio Mamba: Bidirectional State Space Model for Audio Representation Learning" in PyTorch ☆13 · Updated this week
- PyTorch implementation of Attention Prompt Tuning: Parameter-Efficient Adaptation of Pre-Trained Models for Action Recognition ☆15 · Updated last year
- GPT-4V with Emotion ☆93 · Updated last year
- Repository for The Power of Sound (TPoS): Audio Reactive Video Generation with Stable Diffusion (ICCV 2023) ☆23 · Updated last year
- ClickDiffusion: Harnessing LLMs for Interactive Precise Image Editing ☆69 · Updated last year
- [NeurIPS 2022] The official repository of Expression Learning with Identity Matching for Facial Expression Recognition ☆43 · Updated last year
- A curated list of facial expression recognition resources, covering both 7-emotion classification and affect estimation ☆273 · Updated 4 months ago
- ☆40 · Updated last year
- [AAAI 2025] VQTalker: Towards Multilingual Talking Avatars through Facial Motion Tokenization ☆50 · Updated 7 months ago
- [WACV 2024] Code release for "VEATIC: Video-based Emotion and Affect Tracking in Context Dataset" ☆15 · Updated last year
- ☆53 · Updated last month
- Official implementation of FaceXFormer: A Unified Transformer for Facial Analysis ☆274 · Updated 4 months ago
- Algorithms for intelligent assessment of human personality traits based on multimodal data, for ranking potential candidates to perfo… ☆44 · Updated 7 months ago
- Repository for the paper "PANGEA: A FULLY OPEN MULTILINGUAL MULTIMODAL LLM FOR 39 LANGUAGES" ☆110 · Updated last month
- ICASSP 2024: Adaptive Super Resolution for One-Shot Talking-Head Generation ☆180 · Updated last year
- A curated list of resources for audio-driven talking face generation ☆141 · Updated 2 years ago
- The open-source implementation of the cross-attention mechanism from the paper "JOINTLY TRAINING LARGE AUTOREGRESSIVE MULTIMODAL MODELS" ☆32 · Updated last year