andrewowens / multisensory
Code for the paper: Audio-Visual Scene Analysis with Self-Supervised Multisensory Features
☆221 · Updated 5 years ago
Alternatives and similar repositories for multisensory:
Users interested in multisensory are comparing it to the libraries listed below.
- Unofficial implementation of Google DeepMind's paper `Objects that Sound` ☆83 · Updated 6 years ago
- 2.5D visual sound dataset ☆97 · Updated 3 years ago
- Co-Separating Sounds of Visual Objects (ICCV 2019) ☆94 · Updated last year
- Implementation of the ECCV 2020 paper "Self-Supervised Learning of Audio-Visual Objects from Video" ☆113 · Updated 4 years ago
- Deep Audio-Visual Embedding network (DAVEnet) implementation in PyTorch ☆65 · Updated 6 years ago
- Torch code for using Residual Networks with LSTMs for lipreading ☆98 · Updated 6 years ago
- Learning to Separate Object Sounds by Watching Unlabeled Video (ECCV 2018) ☆49 · Updated 5 years ago
- Listen to Look: Action Recognition by Previewing Audio (CVPR 2020) ☆129 · Updated 3 years ago
- 2.5D visual sound ☆114 · Updated last year
- Audio-Visual Event Localization in Unconstrained Videos (ECCV 2018) ☆181 · Updated 4 years ago
- AVSpeech downloader ☆67 · Updated 6 years ago
- Audio-Visual Speech Recognition using Deep Learning ☆60 · Updated 6 years ago
- TensorFlow implementation of "SoundNet" ☆145 · Updated 7 years ago
- Core functions and models for speech separation ☆155 · Updated 3 years ago
- Lip Reading in the Wild using ResNet and LSTMs in PyTorch ☆58 · Updated 7 years ago
- PyTorch code for a BMVC 2018 paper ☆87 · Updated 5 years ago
- WaveNet autoencoder for unsupervised speech representation learning (after Chorowski, Jan 2019) ☆175 · Updated 4 years ago
- Codebase and dataset for the paper "Learning to Localize Sound Source in Visual Scenes" ☆90 · Updated 4 months ago
- ☆226 · Updated 5 years ago
- Processing and extraction of face and mouth image files from the TCDTIMIT database ☆45 · Updated 4 years ago
- Supporting code for "Emotion Recognition in Speech using Cross-Modal Transfer in the Wild" ☆101 · Updated 5 years ago
- PyTorch implementation of "EPIC-Fusion: Audio-Visual Temporal Binding for Egocentric Action Recognition" (ICCV 2019) ☆111 · Updated 4 years ago
- Localizing Visual Sounds the Hard Way ☆79 · Updated 2 years ago
- Audio-Visual Instance Discrimination with Cross-Modal Agreement ☆128 · Updated 3 years ago
- MUSIC dataset from "The Sound of Pixels" (ECCV 2018) ☆123 · Updated 2 years ago
- ☆58 · Updated 7 years ago
- PyTorch implementation of "See, Hear, and Read: Deep Aligned Representations" ☆33 · Updated 6 years ago
- Codebase for the ECCV 2018 paper "The Sound of Pixels" ☆378 · Updated 2 years ago
- Code for the Active Speakers in Context paper (CVPR 2020) ☆54 · Updated 3 years ago
- Code for Discriminative Sounding Objects Localization (NeurIPS 2020) ☆57 · Updated 3 years ago