rohitgajawada / Where-are-they-looking-PyTorch
Where are they looking? - Gaze Following via Attention modelling and Deep Learning
☆36 · Updated 6 years ago
Alternatives and similar repositories for Where-are-they-looking-PyTorch
Users interested in Where-are-they-looking-PyTorch are comparing it to the repositories listed below.
- Code for the ACCV 2018 paper 'Believe It or Not, We Know What You Are Looking at!' ☆110 · Updated 4 years ago
- Code for Gaze-Following in video ☆48 · Updated 8 years ago
- Contextual Attention for Hand Detection in the Wild, ICCV 2019 ☆145 · Updated 2 years ago
- PyTorch implementation of MultiPoseNet (ECCV 2018, Muhammed Kocabas et al.) ☆195 · Updated 6 years ago
- Code for the Pose Residual Network introduced in the 'MultiPoseNet: Fast Multi-Person Pose Estimation using Pose Residual Network' paper htt… ☆286 · Updated 4 years ago
- Evaluation of multi-person pose estimation and tracking ☆221 · Updated 5 years ago
- Code for popular action recognition models, verified on the Something-Something dataset ☆245 · Updated 6 years ago
- Code for the Gaze360: Physically Unconstrained Gaze Estimation in the Wild dataset ☆254 · Updated 3 years ago
- A curated list of awesome gaze estimation frameworks, datasets, and other awesomeness ☆68 · Updated 4 months ago
- Finding Tiny Faces in PyTorch ☆166 · Updated last week
- Chainer implementation of Pose Proposal Networks ☆117 · Updated 6 years ago
- Motion Fused Frames implementation in PyTorch, with code and pretrained models ☆132 · Updated last year
- Video platform for action recognition and object detection in PyTorch ☆223 · Updated 3 years ago
- Real-time action detection demo for the work Actor Conditioned Attention Maps; this repo includes a complete pipeline for person detectio… ☆152 · Updated 2 years ago
- Web-browser-based demo of OpenPifPaf ☆96 · Updated 2 years ago
- Gaze tracking based on head orientation and eye orientation ☆66 · Updated 6 years ago
- An MXNet implementation of Fine-Grained Head Pose ☆49 · Updated 7 years ago
- Preprocessing tools for the Google AVA dataset ☆49 · Updated 7 years ago
- STEP: Spatio-Temporal Progressive Learning for Video Action Detection. CVPR'19 (Oral)